DeepMind? Pffft! More like “dumb as a bag of rocks.”

Google’s DeepMind AI project, self-described as “the world leader in artificial intelligence research,” was recently tested against the type of math test that 16-year-olds take in the UK. The result? It got only 14 out of 40 questions right. Womp womp!

“The researchers tested several types of AI and found that algorithms struggle to translate a question as it appears on a test, full of words and symbols and functions, into the actual operations needed to solve it.” (Medium)
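To get a feel for that translation gap, here is a tiny, hypothetical sketch in Python. The question is made up in the style of the exam, and the code is illustrative rather than anything DeepMind ran: once a human has converted the wording into symbols, a library like sympy solves it instantly. The step the models kept fumbling is getting from the English to the symbols in the first place.

import sympy as sp

question = "Solve 2*x + 7 = -5 for x."   # exam-style wording (made up for illustration)
x = sp.symbols("x")
equation = sp.Eq(2 * x + 7, -5)          # the hand-done translation into actual operations
print(sp.solve(equation, x))             # prints [-6]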


Why It’s Hot:

There is no shortage of angst among humans worried about losing their jobs to AI. Rather than treating this as a reprieve, humans should take it as a sign that AI might be best suited to complement human judgment, not to replace it.

Google AI predicts heart attacks by scanning your eye…

This week, the geniuses at Google and its “health-tech subsidiary” Verily announced AI that, using just a scan of your eye, can predict your risk of a major cardiac event with roughly the same accuracy as the currently accepted method.

They have created an algorithm that analyzes the back of your eye for important predictors of cardiovascular health “including age, blood pressure, and whether or not [you] smoke” to assess your risk.

As explained by The Verge:

“To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.”
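For the curious, here is roughly what that kind of setup could look like in code. This is a hypothetical sketch assuming TensorFlow/Keras, not Google and Verily’s actual model; the backbone choice, the output heads, and every variable name (eye_scans, ages, and so on) are placeholders for illustration.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_risk_factor_model(image_shape=(299, 299, 3)):
    scan = tf.keras.Input(shape=image_shape)
    # Backbone CNN turns the retinal scan into a feature vector.
    backbone = tf.keras.applications.InceptionV3(include_top=False, weights=None, pooling="avg")
    features = backbone(scan)
    # One head per risk factor: regression for age and blood pressure, a sigmoid for smoking.
    age = layers.Dense(1, name="age")(features)
    systolic_bp = layers.Dense(1, name="systolic_bp")(features)
    smoker = layers.Dense(1, activation="sigmoid", name="smoker")(features)
    model = Model(scan, [age, systolic_bp, smoker])
    model.compile(optimizer="adam",
                  loss={"age": "mse", "systolic_bp": "mse", "smoker": "binary_crossentropy"})
    return model

# model = build_risk_factor_model()
# model.fit(eye_scans, {"age": ages, "systolic_bp": bps, "smoker": smoking_labels})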

Why It’s Hot:

This type of application of AI can help doctors quickly know what to look into, and it shows how AI could help them spend less time diagnosing and more time treating. It’s far from flawless right now, but in the future, we might see an AI-powered robot instead of a nurse before we see the doctor.

[Source]

zero training = zero problem, for AlphaGo Zero…


One of the major milestones in AI’s relatively short history came when Google’s AlphaGo beat the best human Go player in the world in three straight games earlier this year. To prepare AlphaGo for that match, Google trained it on games played by other Go players, so it could observe and learn which moves win and which don’t. It learned, essentially, by watching others.

This week, Google announced AlphaGo Zero, AI that completely taught itself to win at Go. All Google gave it was the rules, and by experimenting with moves on its own, it learned how to play, and beat its predecessor AlphaGo 100 games to zero after just over a month of training.
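To make “all Google gave it was the rules” concrete, here is a toy, hypothetical sketch in Python of the self-play idea: an agent that is handed only a game’s rules, plays against itself, and learns which positions tend to lead to wins. The rules interface below is invented for illustration, and AlphaGo Zero’s real method is far more sophisticated, pairing a deep neural network with Monte Carlo tree search.

import random
from collections import defaultdict

def self_play_training(rules, episodes=10_000, epsilon=0.1, alpha=0.5):
    # `rules` is assumed to expose: initial_state(), to_move(s), legal_moves(s),
    # play(s, m), and winner(s) -> 1, 2, 0 for a draw, or None while the game is on.
    # States are assumed hashable. value[s] estimates the result for the player
    # who just moved into state s (+1 they go on to win, -1 they lose, 0 draw).
    value = defaultdict(float)
    for _ in range(episodes):
        state, history = rules.initial_state(), []
        while rules.winner(state) is None:
            player, moves = rules.to_move(state), rules.legal_moves(state)
            if random.random() < epsilon:
                move = random.choice(moves)      # occasionally explore an untried idea
            else:                                # otherwise pick the move whose outcome looks best
                move = max(moves, key=lambda m: value[rules.play(state, m)])
            state = rules.play(state, move)
            history.append((player, state))
        result = rules.winner(state)             # 1, 2, or 0 for a draw
        for player, s in history:                # nudge each visited state's estimate
            target = 0.0 if result == 0 else (1.0 if result == player else -1.0)
            value[s] += alpha * (target - value[s])
    return value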

Why It’s Hot:

AI is becoming truly generative with what DeepMind calls “tabula rasa learning.” While a lot of the AI we still see on a daily basis is extremely primitive in comparison, the future of AI is a machine’s ability to create things from basic information and a question. And ultimately, learning on its own can lead to better results. As the researchers put it, “Even when reliable data sets are available, they may impose a ceiling on the performance of systems trained in this manner… By contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.”