
Thursday 10 March 2016

Google's DeepMind Computer Wins Second Round Against Lee Sedol


Seoul, Mar 10 (Prensa Latina) Google has confirmed its AlphaGo computer has taken another victory against its human opponent, champion Lee Sedol.
It is the second of five matches pitting DeepMind's artificial intelligence program against the South Korean expert, with the winner taking home $1 million (£706,388).

AlphaGo won the first match by resignation after 186 moves.

While there are still three games left in the Challenge Match, this marks the first time in history that a computer program has defeated a top-ranked human Go player twice in a row on a full 19x19 board with no handicap.

The winner of the Challenge Match must win at least three of the five games in the tournament, so today's result does not set the final outcome.

The next game will be on March 12 at 1pm Korea Standard Time (4am GMT/8pm PT/11pm ET), followed by games on March 13 and March 15.

Go has been described as one of the 'most complex games ever devised by man' and has trillions of possible moves, but Google recently stunned the world by announcing its AI software had beaten one of the game's grandmasters.

The program recently beat the Chinese-born European champion, Fan Hui, five games to nothing.

The game, which was first played in China and is far harder than chess, had been regarded as an outstanding 'grand challenge' for artificial intelligence - until now.

Wednesday 9 March 2016

Why Google's Go win against Lee Se-dol is massive

THE VERGE

DeepMind’s dramatic victory over legendary Go player Lee Se-dol earlier today is a huge moment in the history of artificial intelligence, and something many predicted would be decades away. "I was very surprised," says Lee. "I didn't expect to lose. I didn't think AlphaGo would play the game in such a perfect manner."
But why is it so impressive that DeepMind’s AlphaGo program — backed by the might of Google — has beaten one of the game’s most celebrated figures? To understand that, you have to understand the game's roots, and how the DeepMind team has built AlphaGo to uproot them.
Go, known as weiqi in China, igo in Japan, and baduk in Korea, is an abstract board game that dates back nearly 3,000 years. It’s a game of strategy played across a 19 x 19 grid; players take turns placing black and white stones to surround points on the grid and capture their opponent’s territory. Although the ruleset is very small, it creates a challenge of enormous depth.

"It’s one of the great intellectual mind sports of the world," says Toby Manning, treasurer of the British Go Association and referee of AlphaGo’s victory over European champion Fan Hui last year. "It’s got extremely simple rules, but these rules give rise to an awful lot of complexity." Manning cites a classic quote from noted 20th-century chess and Go player Edward Lasker: "While the baroque rules of chess could only have been created by humans, the rules of Go are so elegant, organic, and rigorously logical that if intelligent life forms exist elsewhere in the universe, they almost certainly play Go."

Because of Go’s deep intricacy, human players become experts through years of practice, honing their intuition and learning to recognize gameplay patterns. "The immediate appeal is that the rules are simple and easy to understand, but then the long-term appeal is that you can’t get tired of this game because there is such a depth," says Korea Baduk Association secretary general Lee Ha-jin. "Although you are spending so much time, there is always something new to learn and you feel that you can get better and stronger."
After starting to play the game at five years old, Lee Ha-jin displayed such a level of talent that her parents decided to send her to a private Go school in Seoul. She lived with her teacher, went to regular school in the daytime, then came back and played Go for several hours every night. Lee eventually turned professional at the age of 16.
A visit to her current workplace, the Korea Baduk Association, illustrates the game’s stature in this country. Members of the Korea Women Baduk League play out matches in stoic silence on one floor. Another floor hosts a room stacked with storied trophies, many of which are slightly creepy disembodied hands. (One old metaphorical name for the game translates as "hand talk.") And in the basement, there’s a full-fledged operating center for Baduk TV, a cable channel dedicated to Go. One of its studios has a mock-up stage for the AlphaGo showdown, where the channel can reenact the matches and provide extra analysis.
Every Go player I’ve spoken to says the same thing about the game: its appeal lies in depth through simplicity. And that also gets to the heart of why it’s so difficult for computers to master. There’s limited data available just from looking at the board, and choosing a good move demands a great deal of intuition.
ALPHAGO GETS BETTER BY PLAYING ITSELF
"Chess and checkers do not need sophisticated evaluation functions," says Jonathan Schaeffer, a computer scientist at the University of Alberta who wrote Chinook, the first program to solve checkers. "Simple heuristics get most of what you need. For example, in chess and checkers the value of material dominates other pieces of knowledge — if I have a rook more than you in chess, then I am almost always winning. Go has no dominant heuristics. From the human's point of view, the knowledge is pattern-based, complex, and hard to program. Until AlphaGo, no one had been able to build an effective evaluation function."

So how did DeepMind do it? AlphaGo uses deep learning and neural networks to essentially teach itself to play. Just as Google Photos lets you search for all your pictures with a cat in them because it holds the memory of countless cat images that have been processed down to the pixel level, AlphaGo’s intelligence is based on it having been shown millions of Go positions and moves from human-played games.
The twist is that DeepMind continually reinforces and improves the system’s ability by making it play millions of games against tweaked versions of itself. This trains a "policy" network to help AlphaGo predict the next moves, which in turn trains a "value" network to ascertain and evaluate those positions. AlphaGo looks ahead at possible moves and permutations, going through various eventualities before selecting the one it deems most likely to succeed. The combined neural nets save AlphaGo from doing excess work: the policy network helps reduce the breadth of moves to search, while the value network saves it from having to internally play out the entirety of each match to come to a conclusion.
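As a rough illustration of that division of labour, here is a hedged Python sketch. The `policy_net`, `value_net`, and `state` objects are hypothetical stand-ins for trained models and a game engine; this shows the shape of the idea, not DeepMind's actual implementation.

```python
# Rough sketch of how the two networks cut the search down to size.
# `policy_net`, `value_net`, and `state` are hypothetical stand-ins:
# this is the shape of the idea, not DeepMind's implementation.

def best_move(state, policy_net, value_net, top_k=5, depth=2):
    """Search only the policy net's top-k suggestions (less breadth), and
    score leaf positions with the value net instead of playing the game
    out to the end (less depth)."""
    if depth == 0 or state.is_terminal():
        return None, value_net(state)  # the value net truncates the rollout

    best, best_score = None, float("-inf")
    for move in policy_net(state)[:top_k]:  # the policy net prunes the breadth
        _, score = best_move(state.play(move), policy_net, value_net,
                             top_k, depth - 1)
        score = -score  # the value is from the side to move, so flip each ply
        if score > best_score:
            best, best_score = move, score
    return best, best_score
```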
This reinforcement learning system makes AlphaGo a lot more human-like and, well, artificially intelligent than something like IBM’s Deep Blue, which beat chess grandmaster Garry Kasparov by using brute force computing power to search for the best moves — something that just isn’t practical with Go. It’s also why DeepMind can’t tweak AlphaGo in between matches this week: since the system only improves by teaching itself, the single match each day isn’t going to make a dent in its learning. DeepMind founder Demis Hassabis says that although AlphaGo has improved since beating Fan Hui in October, it’s using roughly the same computing power for the Lee Se-dol matches, having already hit a point of diminishing returns in that regard.




That’s not to say that AlphaGo as it exists today would be a better system for chess, according to one of Deep Blue’s creators. "I suspect that it could perhaps produce a program that is superior to all human grandmasters," says IBM research scientist Murray Campbell, who describes AlphaGo as a "very impressive" program. "But I don’t think it would be state of the art, and why I say that is that chess is a qualitatively different game on the search side — search is much more important in chess than it is in Go. There are certainly parts of Go that require very deep search, but it’s more a game about intuition and evaluation of features and seeing how they interact. In chess there’s really no substitute for search, and modern programs — the best program I know is one called Komodo — are incredibly efficient at searching through the many possible moves, and searching incredibly deeply as well. I think it would be difficult for a general mechanism like the one created in AlphaGo to be applied to chess; I just don’t think it’d be able to recreate that search, and it’d need another breakthrough."

DeepMind, however, believes that the principles it uses in AlphaGo have broader applications than just Go. Hassabis makes a distinction between "narrow" AIs like Deep Blue and artificial "general" intelligence (AGI), the latter being more flexible and adaptive. Ultimately the Google unit thinks its machine learning techniques will be useful in robotics, smartphone assistant systems, and healthcare; last month DeepMind announced that it had struck a deal with the UK’s National Health Service.

Today, though, the focus is on Go, and with good reason — the first victory over Lee Se-dol is major news even if AlphaGo loses the next four matches. "Go would lose one big weapon," Lee Ha-jin told me last week when asked about what defeat for Lee Se-dol would mean for the game at large. "We were always so proud that Go was the only game that can not be defeated by computers, but we wouldn’t be able to say that any more, so that would be a little disappointing."
"WE’RE ABSOLUTELY IN SHOCK."
But AlphaGo could also open up new avenues for the game. Members of the Go community are as stunned by the inventive, aggressive way AlphaGo won as by the fact that it won at all. "There were some moves at the beginning — what would you say about those three moves on the right on the fifth line?" American Go Association president Andy Okun asked VP of operations Andrew Jackson, who also happens to be a Google software engineer, at the venue following the match. "As it pushes from behind?" Jackson replied. "If I made those same moves…" Okun continued. "Our teachers would slap our wrists," Jackson agreed. "They’d smack me!" said Okun. "You don’t push from behind on the fifth line!"
"We’re absolutely in shock," said Jackson. "There’s a real question, though. We’ve got this established Go orthodoxy, so what’s this going to reveal to us next? Is it going to shake things up? Are we going to find these things that we thought were true — these things you think you know and they just ain’t so?"

GOOGLE DEEPMIND CHALLENGE: A GAME BETWEEN A COMPUTER AND A GENIUS


On Wednesday afternoon in the South Korean capital, Seoul, Lee Se-dol, the 33-year-old master of the ancient Asian board game Go, will sit down to defend humanity.
On the other side of the table will be his opponent: AlphaGo, a programme built by Google subsidiary DeepMind which became, in October, the first machine to beat a professional human Go player, the European champion Fan Hui. That match proved that AlphaGo could hold its own against the best; this one will demonstrate whether “the best” have to relinquish that title entirely.
Lee, who is regularly ranked among the top three players alive, has been a Go professional for 21 years; AlphaGo won its first such match less than 21 weeks ago. Despite that, the computer has already played more games of Go than Lee could hope to fit into his life if he lived to a hundred, and it’s good. Very good.


At the press conference confirming the details of the match, Lee exuded confidence. “I don’t think it will be a very close match,” he told the assembled crowd with a sheepish grin. “I believe it will be 5–0, or maybe 4–1. So the critical point for me will be to not lose one match.”
DeepMind thinks otherwise. The company was founded by Demis Hassabis, a 39-year-old Brit who started the artificial intelligence (AI) research firm after a varied career taking in a neuroscience PhD, blockbuster video game development, and master-level chess – and he puts its chances of winning the match at around 50–50.
Clearly, one of them is wrong. Either Lee has vastly overestimated his chances against a new breed of AI, or Hassabis and company still don’t understand quite how powerful a player they are up against. But the answer to that, revealed over the course of five matches throughout the week, will have ramifications far beyond the world of Go.
The ancient Asian game of Go
On the surface, Go looks simple. Compared with chess – which has six different types of pieces, each with different movement rules, and fiddly additions such as castling and promotion – a Go board is the height of elegance.
Each player takes it in turns placing stones of their colour on a 19-by-19 board, attempting to surround and thus capture their opponent’s pieces. The player who has taken the most territory, by surrounding or occupying it with their own stones, at the end of the game is the winner.
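The capture rule is simple enough to state in a few lines of code. Below is a minimal Python sketch, assuming a bare-bones board representation invented for this example: a connected group of stones is captured the moment it has no adjacent empty points ("liberties") left.

```python
# Minimal sketch of the capture rule: a connected group is captured when it
# has no empty adjacent points ("liberties"). The board is a dict mapping
# (row, col) -> "B" or "W"; any point absent from the dict is empty.

def group_and_liberties(board, start, size=19):
    colour = board[start]
    group, liberties, frontier = set(), set(), [start]
    while frontier:  # flood-fill outwards from the starting stone
        point = frontier.pop()
        if point in group:
            continue
        group.add(point)
        r, c = point
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if not (0 <= nb[0] < size and 0 <= nb[1] < size):
                continue  # off the edge of the board
            if nb not in board:
                liberties.add(nb)       # an empty neighbour is a liberty
            elif board[nb] == colour:
                frontier.append(nb)     # same colour: part of the group
    return group, liberties

# A lone black stone surrounded on all four sides has no liberties: captured.
board = {(3, 3): "B", (2, 3): "W", (4, 3): "W", (3, 2): "W", (3, 4): "W"}
_, libs = group_and_liberties(board, (3, 3))
print(len(libs))  # 0, so the black stone is removed from the board
```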
But the simplicity of the ruleset belies the astonishing complexity the game can produce. The first move of a game of chess offers 20 possibilities; the first move of a game of Go can involve placing the stone in any of 361 positions. A game of chess lasts around 80 turns, while Go games last around 150. That leads to a staggering number of possibilities: there are more legal board states in a game of Go than there are atoms in the universe.
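The back-of-the-envelope arithmetic behind those figures is easy to reproduce from the branching factors and game lengths quoted above; these are crude game-tree estimates, not exact counts of legal positions.

```python
# Back-of-the-envelope arithmetic using the figures quoted above: roughly
# 20 choices over ~80 turns for chess, 361 choices over ~150 turns for Go.
# These are crude game-tree estimates, not exact counts of legal positions.

import math

chess_tree = 80 * math.log10(20)    # log10 of 20^80
go_tree = 150 * math.log10(361)     # log10 of 361^150

print(f"chess: about 10^{chess_tree:.0f} possible game sequences")  # ~10^104
print(f"go:    about 10^{go_tree:.0f} possible game sequences")     # ~10^384
# For comparison, the observable universe contains roughly 10^80 atoms.
```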
And so both chess and Go are resistant to the tactic by which simpler games, such as noughts and crosses or draughts (tic-tac-toe and checkers, to Americans), have been “solved”: by enumerating every possible move, and drawing up rules for how to guarantee that a computer will be able to play to at least a draw. Each game is just too complex.


Chess computers can at least rely on a modified version of the same tactic. Such machines, including Deep Blue – the computer made by IBM which beat grandmaster Garry Kasparov in 1997, ushering in an age of dominance by computers in chess – rely on calculating and then judging the value of vast numbers of possible moves. Deep Blue, for instance, could evaluate 200m possible moves in a second. Those machines play by looking into the future, to find the set of moves that will lead them to the strongest position, and then playing them out step by step.
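The skeleton of that look-ahead tactic fits in a dozen lines. The sketch below is generic minimax search; `evaluate`, `legal_moves`, and `play` are hypothetical stand-ins for an engine's internals, and real programs like Deep Blue add alpha-beta pruning, move ordering, and enormous raw speed on top.

```python
# Generic minimax: search every line of play to a fixed depth, score the
# leaves with an evaluation function, and assume both sides play their best.
# `evaluate`, `legal_moves`, and `play` are hypothetical stand-ins; real
# engines add alpha-beta pruning, move ordering, and enormous raw speed.

def minimax(state, depth, maximizing, evaluate, legal_moves, play):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state)  # the evaluation function judges the leaf
    scores = [minimax(play(state, m), depth - 1, not maximizing,
                      evaluate, legal_moves, play)
              for m in moves]
    return max(scores) if maximizing else min(scores)
```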
That tactic doesn’t work for Go. Partly, that’s because of one further complication in the game: the immense difficulty of actually evaluating a move. A chess player can easily look at a board and see who is in the stronger position, often simply by counting the number of pieces on the board held by each player.
In Go, such an approach was long thought impossible. And even if that problem could be solved, the sheer scale of the game meant that exhaustively searching through every possible move left the machine far from competitive with even a weak human player. As a result, as recently as 2014, a leading developer of Go software estimated it would be a decade before a machine could beat a professional player.
In fact, it was less than a year.
‘Deep reinforcement learning’
DeepMind approached the problem by seeing whether the company could teach a neural network to play Go. The technology, which began with attempts to mimic the way the human brain interprets and processes information, is at the heart of DeepMind’s AI research, and lends itself well to what Hassabis, speaking on the eve of his trip to Seoul to oversee the competition, calls “deep reinforcement learning”.
“It’s the combination of deep learning, neural network stuff, with reinforcement learning,” he explains. “Learning by trial and error, incrementally improving, and learning from your mistakes.”
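In its simplest form, that trial-and-error loop can be sketched in a few lines of Python. The toy below nudges the preference for every move in a finished game up after a win and down after a loss; it is the bare idea only, nothing like DeepMind's actual training pipeline.

```python
# Toy sketch of learning by trial and error: after each game, nudge the
# preference for every move that was played up if the game was won and
# down if it was lost. This is the bare idea, not DeepMind's pipeline.

import math
import random
from collections import defaultdict

prefs = defaultdict(float)  # learned move preferences (log-weights)

def choose(moves):
    """Sample a move, favouring those that have worked out before."""
    weights = [math.exp(prefs[m]) for m in moves]
    return random.choices(moves, weights=weights)[0]

def learn_from_game(moves_played, won, lr=0.1):
    reward = 1.0 if won else -1.0
    for move in moves_played:
        prefs[move] += lr * reward  # reinforce wins, discourage losses
```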
DeepMind had already used the technique successfully when it built a system capable of learning how to play old Atari video games. But thought rapidly turned to a greater challenge, and one which had for a long time represented a holy grail of AI research. Just two months after the Atari research was published, the team got its initial results on the Go project, Hassabis says. “Then we felt, when we assessed it, that if we put a serious team on to it we could make some pretty fast progress.”
The idea of applying neural networks to solve tricky problems in AI isn’t confined to DeepMind, but the technology is notoriously tricky to refine. Hassabis likens it to teaching a child, rather than programming a computer: even if the team knows what needs to be changed, they can’t simply add a line of code. Instead, they need to show the software enough examples of correct behaviour for it to draw its own inferences.
But DeepMind did hit upon a few genuine breakthroughs. “The big jump was the discovery of the value network, which was last summer,” Hassabis says. That was the realisation that a finely tuned neural network could solve one of the problems previously thought impossible, and learn to predict the winner of a game by looking at the board.
From there, progress was rapid. The value network, paired with a second neural network, would work to pick a few possible moves (based on similar plays seen in previous matches) and then estimate which of the resulting board states would be strongest for the AlphaGo player.
That second neural network works differently. Called the policy network, it was trained on thousands of matches played by Go professionals, with the aim of predicting where they would play the next move. It managed to predict the right move 57% of the time, allowing it to very quickly reach a level of competency near that of the best humans.
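That 57% figure is top-move prediction accuracy: how often the network's first choice matches what the professional actually played. Here is a hedged sketch of how such a number is measured, with `model` standing in for a hypothetical classifier over the board's 361 points.

```python
# Sketch of the measurement behind that 57% figure. `model` is a hypothetical
# classifier that, given a position, returns a score for each of the 361
# points; accuracy is how often its top choice matches the expert's move.

def top1_accuracy(model, dataset):
    """`dataset` is an iterable of (position, expert_move) pairs taken
    from professional games; `expert_move` is an index from 0 to 360."""
    hits = total = 0
    for position, expert_move in dataset:
        scores = model(position)
        predicted = max(range(361), key=lambda pt: scores[pt])
        hits += predicted == expert_move
        total += 1
    return hits / total  # AlphaGo's policy network scored roughly 0.57
```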
The policy network on its own is good enough, according to DeepMind, to beat every other Go program on the market. But it’s when the two neural networks work in concert that AlphaGo really shines. Meanwhile, a third tool, called Monte Carlo tree search, helps the system play strategically as well as tactically.
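For the curious, below is a compact sketch of vanilla Monte Carlo tree search with random playouts. AlphaGo's variant steers the same four-step loop with its policy network and scores positions with its value network; everything here, including the `state` interface, is a simplified stand-in rather than DeepMind's code.

```python
# Vanilla Monte Carlo tree search in miniature: select, expand, simulate,
# backpropagate. The `state` interface (legal_moves, play, is_terminal,
# winner) is a simplified stand-in, and a full implementation would also
# flip the playout result at alternating plies.

import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.wins, self.visits = {}, 0.0, 0

    def uct_child(self, c=1.4):
        # Balance exploiting strong children against exploring rare ones.
        return max(self.children.values(),
                   key=lambda n: n.wins / n.visits
                   + c * math.sqrt(math.log(self.visits) / n.visits))

def mcts(root_state, iterations=1000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while every move has already been tried.
        while node.children and len(node.children) == len(node.state.legal_moves()):
            node = node.uct_child()
        # 2. Expansion: add one untried move as a new leaf.
        untried = [m for m in node.state.legal_moves() if m not in node.children]
        if untried:
            move = random.choice(untried)
            node.children[move] = node = Node(node.state.play(move), node)
        # 3. Simulation: play random moves to the end of the game.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        result = state.winner()  # 1.0 if the root player won, else 0.0
        # 4. Backpropagation: credit every node on the path back to the root.
        while node:
            node.visits += 1
            node.wins += result
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)
```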
Lee’s overconfidence, says Hassabis, is because he hasn’t seen the most recent progress. “He’s very confident, because he looked at the Fan Hui version” that played in October. “And clearly, if we were to play that, he would thrash it.
“I think he’s basing it off that, plus some approximation of how much it might have improved … All I can say is that our tests are leading us to believe that we have a good chance.” As for Lee’s trash talk, Hassabis counters in his own style. “I would be very disappointed if we didn’t win a game – put it that way.”
Could this be the watershed moment for artificial intelligence?
If DeepMind does win the match, it will be a watershed moment for AI with only one genuine precedent: Deep Blue’s victory over Kasparov in 1997. Hassabis’ chess days were over by then, but he followed the match as closely as he could – given that it fell weeks before his computer science finals at Cambridge (he graduated with a double first).
He recalls being surprised by Deep Blue’s success. “He [Kasparov] would say this himself, of course, but I think he was probably at that stage still slightly stronger than Deep Blue. As we know now it was just a matter of time, but at that stage it still wasn’t clear.”
That match was won with the slightest of margins, though Deep Blue’s occasionally erratic play style led to controversy, with Kasparov publicly accusing IBM of cheating in the match. It’s a conflict DeepMind is eager to avoid, and part of the reason the team published its ground-breaking Nature paper, detailing the inner workings of AlphaGo, in advance.
“If you wanted to, with enough effort you could probably recreate AlphaGo from that paper in about a year, if you put enough people on it,” Hassabis says. “Whereas IBM didn’t publish the paper for another five to ten years afterwards, and then they dismantled Deep Blue. So they did a few things that didn’t help, that fuelled the paranoia.”
What impact would a DeepMind victory have?
The chess world has had two decades to live with the fallout of Deep Blue’s victory over Kasparov. But Frederic Friedel, a computer chess pioneer and the founder of the news site ChessBase, argues that it’s possible to overstate the effect the victory had. “AlphaGo winning won’t change the world of Go. It’s like you’ve built a bicycle or a car that can go faster than Usain Bolt, and you say: ‘Look at how fast it is!’ Does this mean the world ends for athletics? No, it doesn’t.”
Friedel, who first met Hassabis as “a cocky little kid who came for a dinner with Garry [Kasparov] and myself in London, and told us about some software he was developing”, does have a warning for Go players, though. “The advent of bicycles and motorbikes did not make athletes give up in despair: they just went on racing each other without these machines. But there is a grave difference to the chess analogy: a 200-metre runner cannot secretly use the assistance of a bicycle, but a chess player can most certainly get his moves surreptitiously from a computer.
“Cheating in chess is becoming a serious problem, and it will become more acute as technology progresses. That will change the game dramatically – not the fact that computers are stronger than humans.”
Thinking about what comes after the match is one step too far for Hassabis and DeepMind, who are focusing everything they have on the next two weeks. If they win, attention will probably turn to cleaning up AlphaGo in preparation for a consumer release, and Hassabis hopes that a highly skilled Go programme could be an important step in popularising the game in the west, where would-be stars are often hampered by the lack of opponents to test their mettle against.
And the company has already turned its attention to other, more practical, problems which can be tackled with the same deep reinforcement learning approach that led to AlphaGo. In the short term, that means helping parent company Google with tricky challenges like voice and image recognition, while in the next five or ten years, Hassabis says, “ultimately we want to apply these techniques in important real-world problems, from medical diagnostics to climate modelling”.
But if AlphaGo wins its match against Lee Se-dol, it will mean much more than just a stepping stone in DeepMind’s own progress. One of the last areas of mental competition in which humanity had an advantage over machines will have been vanquished. If you still think you’re better than an AI, now is the time to think again.

Tuesday 1 March 2016

Google admits responsibility after self-driving car hit bus

Alphabet Inc’s Google said on Monday it bears “some responsibility” after one of its self-driving cars struck a municipal bus in a minor crash earlier this month.
The crash may be the first case of one of its autonomous cars hitting another vehicle as a result of an error by the self-driving car.
The Mountain View, California-based Internet search leader and tech firm said it updated its software after the crash to avoid future incidents.
In a Feb. 23 report filed with California regulators, Google said the crash took place in Mountain View on Feb. 14 when a self-driving Lexus RX450h sought to get around some sandbags in a wide lane.
Google said in the filing the autonomous vehicle was traveling at less than 2 miles per hour, while the bus was moving at about 15 miles per hour.
The vehicle and the test driver “believed the bus would slow or allow the Google (autonomous vehicle) to continue,” it said.
But three seconds later, as the Google car in autonomous mode re-entered the center of the lane, it struck the side of the bus, causing damage to the left front fender, front wheel and a driver side sensor. No one was injured.

Google said in a statement on Monday that “we clearly bear some responsibility, because if our car hadn’t moved, there wouldn’t have been a collision. That said, our test driver believed the bus was going to slow or stop to allow us to merge into the traffic, and that there would be sufficient space to do that.”

The company also said it has reviewed this incident “and thousands of variations on it in our simulator in detail and made refinements to our software. From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future.”
There has been no official determination of fault in the crash. Google has previously said that its autonomous vehicles have never been at fault in any crashes.
The Mountain View Police Department said no police report was filed in the incident.
Stacey Hendler Ross, spokeswoman for the Santa Clara Valley Transportation Authority, which operates municipal buses in Mountain View, confirmed the incident occurred, but said she did not know any details.
A spokesman for the California Department of Motor Vehicles said on Monday it will speak to Google to gather additional information, but added “the DMV is not responsible for determining fault.”
A spokesman for the U.S. National Highway Traffic Safety Administration declined to comment.
The crash comes as Google has been making the case that it should be able to test vehicles without steering wheels and other controls.
In December, Google criticized California for proposing regulations that would require autonomous cars to have a steering wheel, throttle and brake pedals when operating on public roads. A licensed driver would need to be ready to take over if something went wrong.
Google said in November that in six years of its self-driving project, it has been involved in 17 minor accidents during more than two million miles of autonomous and manual driving combined.
“Not once was the self-driving car the cause of the accident,” Google said at the time.


