DeepMind? Pffft! More like “dumb as a bag of rocks.”

Google’s DeepMind AI project, self-described as “the world leader in artificial intelligence research,” was recently tested against the type of math exam that 16-year-olds take in the UK. The result? It scored only 14 out of 40. Womp womp!

“The researchers tested several types of AI and found that algorithms struggle to translate a question as it appears on a test, full of words and symbols and functions, into the actual operations needed to solve it.” (Medium)
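
To see what that translation step involves, here is a toy sketch in Python (not DeepMind’s benchmark or code): a hand-written parser turns a worded arithmetic question into an expression and evaluates it. Hard-coding that mapping is trivial; the neural models in the study had to learn it end to end from question-and-answer examples, which is where they stumbled.

```python
# Toy illustration only: a hand-written parser for simple worded arithmetic.
# The neural models in the study had to learn this question-to-operations
# mapping from examples instead of being given rules like these.
import re

WORD_OPS = {
    "plus": "+",
    "minus": "-",
    "times": "*",
    "divided by": "/",
}

def words_to_expression(question: str) -> str:
    """Rewrite e.g. 'What is 7 times 6 minus 5?' as '7 * 6 - 5'."""
    text = question.lower().rstrip("?").replace("what is", "").strip()
    for word, symbol in WORD_OPS.items():
        text = text.replace(word, symbol)
    # Only digits, operators, dots and spaces may reach eval() below.
    if not re.fullmatch(r"[\d\s+\-*/.]+", text):
        raise ValueError(f"cannot parse: {question!r}")
    return text

def solve(question: str) -> float:
    return eval(words_to_expression(question))

if __name__ == "__main__":
    print(solve("What is 7 times 6 minus 5?"))        # 37
    print(solve("What is 100 divided by 4 plus 3?"))  # 28.0
```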


Why It’s Hot

There is no shortage of angst among humans worried about losing their jobs to AI. But rather than simply breathing a sigh of relief, we should take this result as a sign that AI may be best suited to complement human judgment, not to replace it.

zero training = zero problem for AlphaGo Zero…


One of the major milestones in AI’s relatively short history came early last year, when Google’s AlphaGo beat the best human Go player in the world in three straight games. To prepare AlphaGo for its match, Google trained it on games played by other Go players, so it could observe and learn which moves win and which don’t. Essentially, it learned by watching others.
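
In machine-learning terms, that first stage was supervised imitation: a policy model is trained on recorded positions to predict the move the human expert actually played. Below is a minimal sketch of the idea, with made-up toy “positions” and a plain softmax classifier standing in for AlphaGo’s far larger policy network.

```python
# Minimal sketch of the supervised stage: fit a softmax "policy" to predict
# the move a human expert played in each recorded position. The positions and
# moves below are random toy stand-ins, not real Go data or AlphaGo's network.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_MOVES, N_EXAMPLES = 16, 4, 500

# Pretend game records: a feature vector per position plus the expert's move.
X = rng.normal(size=(N_EXAMPLES, N_FEATURES))
hidden_W = rng.normal(size=(N_FEATURES, N_MOVES))
noise = rng.normal(scale=0.5, size=(N_EXAMPLES, N_MOVES))
human_moves = np.argmax(X @ hidden_W + noise, axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

W = np.zeros((N_FEATURES, N_MOVES))
learning_rate = 0.1
for step in range(200):
    probs = softmax(X @ W)                        # policy's move distribution
    targets = np.eye(N_MOVES)[human_moves]        # the expert's actual moves
    grad = X.T @ (probs - targets) / N_EXAMPLES   # cross-entropy gradient
    W -= learning_rate * grad

accuracy = (np.argmax(X @ W, axis=1) == human_moves).mean()
print(f"imitation accuracy on the toy data: {accuracy:.2f}")
```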

This week, Google announced AlphaGo Zero, an AI that taught itself to win at Go entirely on its own. All Google gave it was the rules; by experimenting with moves, it learned how to play, and after just over a month of training it beat its predecessor AlphaGo 100 games to zero.

Why It’s Hot

AI is becoming truly generative with what DeepMind calls “tabula rasa learning.” While much of the AI we encounter day to day is extremely primitive by comparison, the future of AI is a machine’s ability to create solutions from nothing more than basic information and a question. And ultimately, learning on its own can lead to better results. As the researchers put it, “Even when reliable data sets are available, they may impose a ceiling on the performance of systems trained in this manner…By contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.”
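
For a concrete, if deliberately tiny, picture of a system “trained from their own experience,” here is a self-play sketch using tabular Q-learning on the game of Nim. It is not AlphaGo Zero’s algorithm, which pairs a deep network with Monte Carlo tree search, but it shows the same tabula rasa idea: the agent gets only the rules and improves purely by playing against itself.

```python
# Self-play sketch in the tabula rasa spirit (a toy stand-in, not AlphaGo
# Zero's deep-network-plus-tree-search setup). The agent is given only the
# rules of Nim (take 1 to 3 sticks; whoever takes the last stick wins)
# and learns a strategy purely by playing against itself.
import random

PILE, MAX_TAKE = 21, 3
# One Q-value per (sticks remaining, sticks taken) pair, shared by both "players".
Q = {(s, a): 0.0 for s in range(1, PILE + 1) for a in range(1, MAX_TAKE + 1) if a <= s}

def best_action(sticks):
    legal = [a for a in range(1, MAX_TAKE + 1) if a <= sticks]
    return max(legal, key=lambda a: Q[(sticks, a)])

learning_rate, epsilon = 0.5, 0.2
for episode in range(20_000):
    s = PILE
    while s > 0:
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        legal = [a for a in range(1, MAX_TAKE + 1) if a <= s]
        a = random.choice(legal) if random.random() < epsilon else best_action(s)
        if s - a == 0:
            target = 1.0  # taking the last stick wins the game
        else:
            # The opponent (also "us") moves next; their best result is our worst.
            target = -max(Q[(s - a, b)] for b in range(1, MAX_TAKE + 1) if b <= s - a)
        Q[(s, a)] += learning_rate * (target - Q[(s, a)])
        s -= a  # hand the position to the other side and keep playing

print([(s, best_action(s)) for s in (5, 6, 7)])  # expect [(5, 1), (6, 2), (7, 3)]
```

With enough self-play games, the toy agent should rediscover the optimal Nim strategy of always leaving its opponent a multiple of four sticks.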

Google’s AI Bot Wins Again!

Proving that its first win was no fluke, Google’s artificial intelligence program AlphaGo won its second game of Go yesterday, beating the world’s #2 human Go champion. Go is often described as the world’s most complicated game, and it was thought that humans would still prevail when matched against AlphaGo.

As reported on cnet.com, world champion Lee Sedol said, “Yesterday I was surprised [at losing] but today it’s more than that, I am quite speechless.” Two wins in a row was virtually unthinkable. The match is being held in South Korea as part of the Google DeepMind Challenge Match, and millions of people around the world are watching the five-game competition via live stream.

“To put it in context, it’s a game for people who think chess is too easy. The victory has also come as a surprise to everyone, as it wasn’t thought that artificial intelligence, the science of computers that more closely mimic human smarts, was ready to take on humans at Go just yet. It’s a sign that AlphaGo is smarter than we thought.”

Why It’s Hot

AI was not thought to have advanced to the level of winning at Go, the world’s most complex game. But it has now done so twice. Time to worry about AI versus the human brain? According to Mark Zuckerberg, we have nothing to fear. As mentioned in the cnet.com article, he pointed out that we’re “nowhere near understanding how intelligence actually works,” never mind replicating and beating it.