In 1997, IBM's Deep Blue computer famously defeated world
chess champion Garry Kasparov. Now, thanks to Google, a computer can beat you at the ancient board game Go.
Google
DeepMind, the search giant's London-based artificial intelligence arm,
has developed a program called AlphaGo that can beat a human player at
the two-person board game. In fact, the program managed to sweep
European champion Fan Hui in a five-game match, the first time a
computer has defeated a professional player in the full-size game of Go.
Go's
complex and abstract nature makes it a more challenging AI project than
designing a similar program for chess. The strategy game, which
originated in China thousands of years ago, is played on a 19x19 grid
with black and white stones and challenges players to occupy more of the
board than their opponent.
A Go board can be arranged in more configurations than there are atoms
in the universe, according to Demis Hassabis, who runs Google DeepMind.
The vast number of possible moves and outcomes makes Go more challenging
for artificial intelligence than chess, he said.
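Hassabis's comparison holds up to a quick back-of-envelope check: each of the 361 intersections on a 19x19 board can be empty, black, or white, so 3^361 is an upper bound on board configurations (the count of strictly legal positions is somewhat smaller, but still astronomically large), versus a commonly cited rough estimate of about 10^80 atoms in the observable universe. A minimal sketch:

```python
# Each of the 19 * 19 = 361 points can be empty, black, or white,
# so 3**361 is an upper bound on board configurations (legal
# positions are somewhat fewer, but still astronomically many).
board_points = 19 * 19
configurations = 3 ** board_points

atoms_in_universe = 10 ** 80  # commonly cited rough estimate

print(len(str(configurations)))            # 173 digits, i.e. roughly 1.7e172
print(configurations > atoms_in_universe)  # True
```

Even this loose upper bound exceeds the atom estimate by more than ninety orders of magnitude, which is why brute-force enumeration of Go positions is hopeless.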
Beating a Go
champion achieves "one of the long-standing grand challenges of AI,"
said Hassabis on a conference call Tuesday. His team published its work
Wednesday in the international science journal Nature.
AlphaGo's
success at the ancient game comes as artificial intelligence moves from
scientific curiosity to real-world applicability. AI's rapid progress has
worried some technologists, including SpaceX CEO Elon Musk and Microsoft co-founder Bill Gates. In August 2014, Musk expressed fears that AI could be more dangerous than nuclear weapons, and even famed physicist Stephen Hawking has voiced reservations about the technology.
Google
is "very cognizant about ethical issues," said Hassabis, adding that the
company, which has reorganized under holding company Alphabet, agreed
not to use DeepMind's technology for military purposes. Google bought DeepMind in 2014.
AlphaGo
combines two techniques: neural networks, a form of machine learning,
and tree search. If you love complex algorithms, feel free to dig into the way AlphaGo plays and learns in Nature. Facebook's AI team has also been developing a program that can play the game.
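To give a flavor of the tree-search half of that combination, here is a toy Monte Carlo tree search (the family of search AlphaGo's is based on) applied to a trivially small take-away game rather than Go. This is an illustrative sketch only, not DeepMind's implementation; all names in it are made up for the example:

```python
import math
import random

# Toy "take-away" game: players alternately remove 1 or 2 stones from a
# pile, and whoever takes the last stone wins. Positions where the pile
# is a multiple of 3 are losing for the player to move.

def legal_moves(pile):
    return [m for m in (1, 2) if m <= pile]

class Node:
    def __init__(self, pile, parent=None, move=None):
        self.pile = pile          # stones left after `move` was played
        self.parent = parent
        self.move = move
        self.children = []
        self.untried = legal_moves(pile)
        self.visits = 0
        self.wins = 0             # wins for the player who moved into this node

def uct_child(node):
    # Upper Confidence Bound applied to Trees (UCT): balance exploiting
    # high win rates against exploring rarely visited children.
    return max(node.children,
               key=lambda c: c.wins / c.visits
               + math.sqrt(2 * math.log(node.visits) / c.visits))

def mcts(pile, iterations=2000):
    root = Node(pile)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while not node.untried and node.children:
            node = uct_child(node)
        # 2. Expansion: add one untried move as a new child.
        if node.untried:
            m = node.untried.pop()
            child = Node(node.pile - m, parent=node, move=m)
            node.children.append(child)
            node = child
        # 3. Simulation: random playout from this position.
        p = node.pile
        mover_wins = True  # does the player who moved into `node` win?
        while p > 0:
            p -= random.choice(legal_moves(p))
            mover_wins = not mover_wins
        # 4. Backpropagation: flip the winner's perspective at each level.
        result = mover_wins
        while node is not None:
            node.visits += 1
            node.wins += int(result)
            result = not result
            node = node.parent
    # Recommend the most-visited move from the root.
    return max(root.children, key=lambda c: c.visits).move
```

With a pile of 4, the search reliably recommends taking 1 stone, leaving the opponent a losing multiple of 3. AlphaGo's innovation, per the Nature paper, was to guide this kind of search with neural networks instead of purely random playouts, taming Go's enormous branching factor.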
"Scientists have been trying to teach computers to win at Go for 20 years," said Facebook CEO Mark Zuckerberg in a post Tuesday.
"We're getting close, and in the past six months we've built an AI that
can make moves in as fast as 0.1 seconds and still be as good as
previous systems that took years to build."
DeepMind said the
same methods used to master Go could one day be used for tasks like
climate modeling and disease analysis. In the next year or two, aspects
of the technology behind AlphaGo could show up in the digital assistant
on your phone or in the recommendation app you use to pick a restaurant,
said Hassabis.
AlphaGo's next challenge will be to play Lee Sedol, one of the top Go players in the world, in Seoul in March.