google AI predicts heart attacks by scanning your eye…

This week, the geniuses at Google and its “health-tech subsidiary” Verily announced an AI that, using just a scan of your eye, can predict your risk of a major cardiac event with roughly the same accuracy as the currently accepted method.

They have created an algorithm that analyzes the back of your eye for important predictors of cardiovascular health “including age, blood pressure, and whether or not [you] smoke” to assess your risk.

As explained by The Verge:

“To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.”
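For the technically curious, here’s a very rough sketch (in Python/Keras, with made-up data) of the general kind of pipeline being described: a small convolutional network trained to predict risk factors like age, blood pressure, and smoking status directly from retinal images. This is not Google’s actual model – the architecture, dataset, and sizes below are all stand-in assumptions for illustration.

```python
# A toy sketch, NOT Google's model: train a small CNN to predict cardiovascular
# risk factors (age, blood pressure, smoking status) from retinal images.
# The data below is random and only stands in for the real medical dataset.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

def build_model(image_shape=(256, 256, 3)):
    inputs = tf.keras.Input(shape=image_shape)
    x = layers.Conv2D(32, 3, activation="relu")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu")(x)
    x = layers.GlobalAveragePooling2D()(x)
    age = layers.Dense(1, name="age")(x)                              # regression
    bp = layers.Dense(1, name="blood_pressure")(x)                    # regression
    smoker = layers.Dense(1, activation="sigmoid", name="smoker")(x)  # yes/no
    model = tf.keras.Model(inputs, [age, bp, smoker])
    model.compile(optimizer="adam",
                  loss={"age": "mse", "blood_pressure": "mse",
                        "smoker": "binary_crossentropy"})
    return model

# Stand-in "dataset": random eye scans plus labels from the medical records.
images = np.random.rand(100, 256, 256, 3).astype("float32")
labels = {"age": np.random.uniform(40, 80, (100, 1)).astype("float32"),
          "blood_pressure": np.random.uniform(100, 160, (100, 1)).astype("float32"),
          "smoker": np.random.randint(0, 2, (100, 1)).astype("float32")}

build_model().fit(images, labels, epochs=1, batch_size=8)
```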

Why It’s Hot:

This type of AI application can help doctors quickly know what to look into, and shows how AI could help them spend less time diagnosing and more time treating. It’s a long way from flawless right now, but in the future, we might see an AI-powered robot instead of a nurse before we see the doctor.


Astronomers Using AI to Analyze the Universe – Fast

The next generation of powerful telescopes will scan millions of stars and generate massive amounts of data that astronomers will be tasked with analyzing. That’s way too much data for people to sift through and model themselves — so astronomers are turning to AI to help them do it.

How they’re using it:

1) Coordinate telescopes. The large telescopes that will survey the sky will be looking for transient events — new signals or sources that “go bump in the night,” says Los Alamos National Laboratory’s Tom Vestrand.

2) Analyze data. Every 30 minutes for two years, NASA’s new Transiting Exoplanet Survey Satellite will send back full frame photos of almost half the sky, giving astronomers some 20 million stars to analyze. Over 10 years there will be 50 million gigabytes of raw data collected.

3) Mine data. “Most astronomy data is thrown away but some can hold deep physical information that we don’t know how to extract,” says Joshua Peek from the Space Telescope Science Institute.

Why it’s hot:

Algorithms have helped astronomers for a while, but recent advances in AI — especially image recognition and faster, cheaper computing power — mean the techniques can be used by more researchers. The new AI will automate the process and be able to understand and identify things that humans may not even know exist or have begun to understand.

“How do you write software to discover things that you don’t know how to describe? There are normal unusual events, but what about the ones we don’t even know about? How do you handle those? That will be where real discoveries happen, because by definition you don’t know what they are.” – Tom Vestrand, Los Alamos National Laboratory




dragon drive: jarvis for your car…

The wave of magical CES 2018 innovations has begun to roll in, and among those already announced is Nuance Communications’ “Dragon Drive” – an (extremely) artificially intelligent assistant for your car.

According to Digital Trends:

“By combining conversational artificial intelligence with a number of nonverbal cues, Dragon Drive helps you talk to your car as though you were talking to a person. For example, the AI platform now boasts gaze detection, which allows drivers to get information about and interact with objects and places outside of the car simply by looking at them and asking Dragon Drive for details. If you drive past a restaurant, you can simply focus your gaze at said establishment and say, “Call that restaurant,” or “How is that restaurant rated?” Dragon Drive provides a “meaningful, human-like response.”

Moreover, the platform enables better communication with a whole host of virtual assistants, including smart home devices and other popular AI platforms. In this way, Dragon Drive claims, drivers will be able to manage a host of tasks all from their cars, whether it’s setting their home heating system or transferring money between bank accounts.

Dragon Drive’s AI integration does not only apply to external factors, but to components within the car as well. For instance, if you ask the AI platform to find parking, Dragon Drive will take into consideration whether or not your windshield wipers are on to determine whether it ought to direct you to a covered parking area to avoid the rain. And if you tell Dragon Drive you’re cold, the system will automatically adjust the car’s climate (but only in your area, keeping other passengers comfortable).”

Why It’s Hot:

Putting aside the question of how many AI assistants we might have in our connected future, what was really interesting to see was the integration of voice and eye-tracking biometrics. Between using your voice as your key (and to personalize settings for you and your passengers), the car reminding you of memories from locations you’re passing, and identifying stores, buildings, restaurants, and other places along your route with just a gaze, it’s amazing to think what the future holds when all the technologies we’ve only just seen emerging in recent years converge.

[More info]

From smart homes to smart offices: Meet Alexa for Business

During its AWS re:Invent conference in Las Vegas, Amazon announced the Alexa for Business platform, along with a set of initial partners that have developed specific “skills” for business customers.

The main goal seems to be making Alexa a key tool for office workers:

– The first focus for Alexa for Business is the conference room. AWS is working with the likes of Polycom and other video and audio conferencing providers to enable this.

– Other partners include Microsoft (to enable better support for its suite of productivity services), Concur (travel expenses), Splunk (big data generated by your technology infrastructure, security systems, and business applications), Capital One, and WeWork.

But that’s just what they’re planning to offer at launch – the new platform will also let companies build out their own skills and integrations.

Why It’s hot: 
We are finally seeing these technologies take a step toward being actually useful and mainstream.
Since Amazon wants to integrate Alexa with other platforms, it could be an interesting tool for future innovations.
Source: TechCrunch

zero training = zero problem, for AlphaGo Zero…

One of the major milestones in the relatively short history of AI came when Google’s AlphaGo beat the best human Go player in the world in three straight games early last year. To prepare AlphaGo for its match, Google trained it on games played by other Go players, so it could observe and learn which moves win and which don’t. Essentially, it learned by watching others.

This week, Google announced AlphaGo Zero, AI that completely taught itself to win at Go. All Google gave it was the rules, and by experimenting with moves on its own, it learned how to play, and beat its predecessor AlphaGo 100 games to zero after just over a month of training.
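To make “learning from just the rules” concrete, here’s a toy, hedged illustration (definitely not AlphaGo Zero itself): an agent that knows only the rules of a much simpler game – Nim, where players alternate taking 1–3 stones and whoever takes the last stone wins – and learns a strong policy purely by playing against itself.

```python
# Toy self-play illustration, not AlphaGo Zero: tabular Q-learning on Nim.
# The agent is given only the rules and improves by playing against itself.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)
Q = defaultdict(float)   # Q[(stones_left, action)] = value for the player to move
ALPHA, EPSILON = 0.5, 0.1

def legal(stones):
    return [a for a in ACTIONS if a <= stones]

def choose(stones):
    moves = legal(stones)
    if random.random() < EPSILON:
        return random.choice(moves)                      # explore
    return max(moves, key=lambda a: Q[(stones, a)])      # exploit

def self_play_episode(start=21):
    stones = start
    while stones > 0:
        action = choose(stones)
        remaining = stones - action
        if remaining == 0:
            target = 1.0          # taking the last stone wins
        else:
            # negamax-style: the opponent moves next, so their best move is our worst case
            target = -max(Q[(remaining, a)] for a in legal(remaining))
        Q[(stones, action)] += ALPHA * (target - Q[(stones, action)])
        stones = remaining

for _ in range(20000):
    self_play_episode()

# With 21 stones the known optimal move is to take 1 (leaving a multiple of 4).
print(max(legal(21), key=lambda a: Q[(21, a)]))
```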

Why It’s Hot:

AI is becoming truly generative with what DeepMind calls “tabula rasa learning”. While a lot of AI we still see on a daily basis is extremely primitive in comparison, the future of AI is a machine’s ability to create things with basic information and a question. And ultimately, learning on its own can lead to better results. As researchers put it, “Even when reliable data sets are available, they may impose a ceiling on the performance of systems trained in this manner…By contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.”

Robot, a kid’s best friend?

Robots are making their way into schools and education to help children lower their stress and boost their creativity. For children with conditions such as diabetes and autism, robots can even help restore self-confidence.

One study shows that children with autism engage better with robots than with humans, because robots are simple and predictable.

Another research group working with children with diabetes makes its robots “imperfect” and has them make mistakes so they don’t intimidate the children. The children learn that they don’t have to be perfect all the time.

Why it’s hot (or not): are robots the right companions for children? What impact would it have on human interactions if children are exposed to AI at such a young age?



The Countdown Begins…AI Versus the World

AI is continuing to rule the press headlines across all industries. No matter who you are or what you do, your life will somehow be affected by artificial intelligence. Below are just a few charts recently published by the Electronic Frontier Foundation on how quickly AI is catching up with humans.

Why It’s Hot:

Artificial intelligence will continue to get better over time. So much so that researchers at Oxford and Yale predict AI will outperform humans in many activities in the coming decades, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks within 45 years, and of AI automating all human jobs within 120 years.

googler creates AI that creates video using one image…

One of the brilliant minds at Google has developed an algorithm that can create (and has created) video from a single image. The AI does this by predicting what each next frame should be based on the previous one, and in this instance did it 100,000 times to produce the 56-minute video you see above. Per its creator:

“I used videos recorded from trains windows, with landscapes that moves from right to left and trained a Machine Learning (ML) algorithm with it. What you see at the beginning is what the algorithm produced after very little learnings. It learns more and more during the video, that’s why there are more and more realistic details. Learnings is updated every 20s. The results are low resolution, blurry, and not realistic most of the time. But it resonates with the feeling I have when I travel in a train. It means that the algorithm learned the patterns needed to create this feeling. Unlike classical computer generated content, these patterns are not chosen or written by a software engineer.”
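Here’s a hedged, minimal sketch of the generation loop being described – start from one seed image and repeatedly feed the model’s own output back in as the next input. The `IdentityModel` below is just a placeholder for whatever trained next-frame network the creator used; it is not his code.

```python
# Minimal sketch of iterative next-frame generation; the "model" is a stand-in.
import numpy as np

class IdentityModel:
    """Placeholder for a trained next-frame predictor (a real one would be a neural net)."""
    def predict(self, batch):
        return batch  # a real model would return the predicted next frame

def generate_video(model, seed_frame, n_frames):
    frames = [seed_frame]
    current = seed_frame
    for _ in range(n_frames - 1):
        # predict frame t+1 from frame t, then feed it back in
        current = model.predict(current[np.newaxis, ...])[0]
        frames.append(current)
    return np.stack(frames)  # shape: (n_frames, height, width, channels)

video = generate_video(IdentityModel(), np.zeros((64, 64, 3), dtype="float32"), n_frames=10)
print(video.shape)  # (10, 64, 64, 3)

# At roughly 30 frames per second, 100,000 generated frames works out to about
# 100000 / 30 / 60 ≈ 55.6 minutes – in the ballpark of the 56-minute video above.
```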

Why it’s hot:

Creativity and imagination have been among the most inimitable human qualities since forever. And anyone who’s ever created anything remotely artistic will tell you inspiration isn’t as easy as hitting ‘go’. While this demonstration looks more like an art-school video project than a timeless social commentary displayed in a museum, it made me wonder – what if bots created art? Would artists compete with them? Would they give up their pursuit because bots can create at the touch of a button? Would this spawn a whole new area of human creativity, born from the emotion of having your work held up next to programmatic art? Could artificial intelligence ever create something that holds up against real human creativity?

better living partying through chemistry technology…

[about 2:05-2:45 should do it]

It’s not just a clever name, PartyBOT is your “Artificial Dance Assistant”, or ADA for short. Debuted at SxSW last month, ADA learns about party-goers musical tastes, drink preferences, and social savvy. Then, it uses facial and voice recognition to monitor the room, playing tunes tailored to the interests of those who aren’t partying hard enough as determined by their expressions and conversations. As described by its creators…

“The users’ relationship with the bot begins on a mobile application, where—through a facial recognition activity—the bot will learn to recognize the user and their emotions. Then, the bot will converse with the user about party staples—music, dancing, drinking and socializing—to learn about them and, most importantly, gauge their party potential. (Are they going to be a dance machine or a stick in the mud—the bot, as a bouncer of sorts, is here to find out.)

Upon arrival at the bar, the user will be recognized by PartyBOT, and throughout the party, the bot will work to ensure a personalized experience based on what it knows about them—their favorite music, beverages, and more. (For example, they might receive a notification when the DJ is playing one of their favorite songs.)”

Why it’s hot:

Obviously this was an intentionally lighthearted demonstration of using bots and other technology to improve an experience for people. But apart from knowing you like Bay Breezes, imagine the improved relationship brands could have with their customers by gathering initial preferences and using those to tailor experiences to each individual. Bots are often thought of in a very simplistic question/answer/solution form, but this shows how combining AI with other emerging technologies can make for a much more personally exciting overall experience.

Google Home can now recognize multiple voices

Google Home can now be trained to identify the different voices of people you live with. Today Google announced that its smart speaker can support up to six different accounts on the same device. The addition of multi-user support means that Google Home will now tailor its answers for each person and know which account to pull data from based on their voice. No more hearing someone else’s calendar appointments.

So how does it work? When you connect your account on a Google Home, Google asks you to say the phrases “Ok Google” and “Hey Google” two times each. Those phrases are then analyzed by a neural network, which can detect certain characteristics of a person’s voice. From that point on, any time you say “Ok Google” or “Hey Google” to your Google Home, the neural network compares the sound of your voice to its previous analysis to determine whether it’s you speaking or not. This comparison takes place only on your device, in a matter of milliseconds.
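For the curious, here’s a rough sketch of the matching step Google describes – compare a new “Ok Google” utterance’s voice signature against the stored enrollment signatures for each household member and pick the closest one. The embedding model itself is assumed (Google hasn’t published this code); the vectors below are random stand-ins.

```python
# Hedged sketch of on-device speaker matching: nearest enrolled voice by cosine similarity.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_speaker(utterance_embedding, enrolled, threshold=0.75):
    """enrolled maps account name -> voice embedding captured during setup."""
    best_name, best_score = None, -1.0
    for name, embedding in enrolled.items():
        score = cosine_similarity(utterance_embedding, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None  # None = unrecognized voice

# Stand-in enrollment data for two household members.
rng = np.random.default_rng(0)
enrolled = {"alice": rng.normal(size=128), "bob": rng.normal(size=128)}
query = enrolled["alice"] + rng.normal(scale=0.1, size=128)  # Alice speaking again, with noise
print(identify_speaker(query, enrolled))  # -> "alice"
```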

Why it’s hot?
-Everyone in the family gets a personal assistant.
-Imagine how it might work in a small business / office
-Once it starts recognizing more than six voices, can every department have its own AI assistant?

Read your EKG Instantly on your Phone with KARDIA

The world knows no deadlier assassin than heart disease. It accounts for one in four fatalities in the US. Early detection remains the key to saving lives, but catching problems at the right time too often relies upon dumb luck. The most effective way of identifying problems involves an EKG machine, a bulky device with electrodes and wires.

Most people visit a doctor for an electrocardiogram. That, too, is no guarantee, because the best detection means being tested when a potential problem reveals itself. Otherwise, early signs of heart disease might go undetected.

At-risk patients might find a compact, easy-to-use EKG machine a good option. Like so many other gadgets, portable EKG machines are getting ever smaller—just look at products like Zio, HeartCheck, and QardioCore.

The Kardia from AliveCor is about the width of two sticks of gum. Stick the $100 device on the back of your phone or slip it into your wallet, place a few fingers on it for 30 seconds, and you’ve got a medical-grade EKG reading on your phone.

But the bigger story is not in the gadget’s size, but in what happens with the heart data it collects. The company uses neural networks and algorithms to identify signs of heart disease, an approach it hopes might change how cardiologists diagnose patients.

The company succeeded in convincing the FDA, the Mayo Clinic, and investors that the device’s ease of use will lead to more frequent testing and increase the likelihood of early detection. About a month of use builds a heart profile, and then Kardia’s data-driven algorithm can detect if something goes amiss. Your doctor receives a message only when an anomaly is detected.
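A hedged sketch of that “profile plus anomaly flag” idea (this is not AliveCor’s actual algorithm): build a personal baseline from about a month of readings, then flag any new reading whose features fall far outside it.

```python
# Toy personal-baseline anomaly detector; features and thresholds are stand-ins.
import numpy as np

def build_profile(readings):
    """readings: (n_readings, n_features) array, e.g. resting heart rate and QRS duration."""
    return readings.mean(axis=0), readings.std(axis=0)

def is_anomalous(reading, profile, z_threshold=3.0):
    mean, std = profile
    z_scores = np.abs(reading - mean) / (std + 1e-9)
    return bool(np.any(z_scores > z_threshold))  # True -> notify the doctor

# Stand-in month of readings: [heart rate (bpm), QRS duration (s)].
rng = np.random.default_rng(1)
baseline = rng.normal(loc=[70.0, 0.09], scale=[5.0, 0.01], size=(30, 2))
profile = build_profile(baseline)

print(is_anomalous(np.array([72.0, 0.09]), profile))   # within the baseline -> False
print(is_anomalous(np.array([140.0, 0.09]), profile))  # far outside it -> True
```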

Why it is hot: The future of diagnostics lies in data-driven approaches. With IBM Watson and other innovations in machine learning, we are in for a healthier future!


Dr. AI Helps Patients Gain Access to Clinical Expertise About Their Condition

According to an article from Access AI, HealthTap is introducing an artificial intelligence engine to triage cases automatically. Dr. A.I. is a personal AI-powered physician that provides patients with doctor-recommended insights.

More than a billion people search the web for health information each year, with approximately 10 billion symptom-related searches on Google alone. While many resources provide useful information, web search results can only provide content semantically related to symptoms. The new function from HealthTap aims to incorporate the context and clinical expertise of doctors who have helped triage hundreds of millions of patients worldwide to provide the most effective course of treatment. Dr. A.I. uses HealthTap’s Health Operating System to analyze the user’s current symptoms and cross-check them against the personal health record the user has created. Based on the patient’s symptoms and characteristics, Dr. A.I. tailors pathways ranging from suggesting relevant doctor insights and content to read, to connecting the patient with a doctor for a live virtual consult, to scheduling an in-person office visit with the right specialist, all the way to directing the patient to more urgent care.
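To make the “pathways” idea concrete, here’s a deliberately simplified sketch of symptom-based triage routing. It is not HealthTap’s actual logic – the symptom flags, scoring, and thresholds are invented for illustration only.

```python
# Toy triage router: score urgency from symptoms + record, map the score to a pathway.
def triage(symptoms, health_record):
    urgent_flags = {"chest pain", "shortness of breath", "fainting"}
    score = 0
    if urgent_flags & set(symptoms):
        score += 3
    if health_record.get("chronic_conditions"):
        score += 1
    if len(symptoms) >= 4:
        score += 1

    if score >= 3:
        return "direct to urgent care"
    if score == 2:
        return "schedule an in-person visit with a specialist"
    if score == 1:
        return "connect with a doctor for a live virtual consult"
    return "suggest relevant doctor insights and content"

print(triage({"cough", "fatigue"}, {"chronic_conditions": []}))      # -> content
print(triage({"chest pain"}, {"chronic_conditions": ["diabetes"]}))  # -> urgent care
```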

Why It’s Hot

At first glance, the app looks like WebMD. Patients input their symptoms using a visual interface and the app spits back a diagnosis. Where this app differs, though, is in the level of personalized recommendations that follow the diagnosis.

Through our SENSE and Journey Mapping work across our pharma clients, we know that patients consult Dr. Google both before and after they are diagnosed with a condition and prescribed a treatment, where they are exposed to virtually limitless information about the condition and drug they’ve been prescribed from all kinds of sources, whether those sources have clinical expertise or not. In some severe cases, this can even stop patients from filling that prescription and taking the drug, due to fear of side effects, intimidating costs of the drug or lack of coverage, anxiety around administering the drug, and, on top of all that, apprehension that this is the correct treatment for them. Dr. A.I. has the potential to circumvent a lot of that behavior by providing clinical expertise about the condition, using the same deductive approach as HCPs, in a patient-focused interface.

Fake News Challenge: Using AI To Crush Fake News

The Fake News Challenge is a grassroots competition of over 100 volunteers and 71 teams from academia and industry, working to find solutions to the problem of fake news.

The competition is designed to foster the development of new tools that help human fact checkers separate real news from fake news using machine learning, natural language processing, and AI.
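As a flavor of what “using machine learning and NLP” can look like at its simplest, here’s a toy baseline-style sketch: represent a headline/article pair as text features and train a classifier to say whether the article supports or refutes the headline. This is not one of the challenge’s actual entries – the examples, labels, and `[SEP]` convention are made up for illustration (it assumes scikit-learn is installed).

```python
# Toy headline/article stance classifier: TF-IDF features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pairs = [
    "scientists confirm vaccine is safe [SEP] a large peer-reviewed study found no safety issues",
    "scientists confirm vaccine is safe [SEP] the study was retracted and found no such thing",
    "city bans cars downtown [SEP] the council voted to close the district to traffic",
    "city bans cars downtown [SEP] officials denied any plan to restrict vehicles",
]
labels = ["supports", "refutes", "supports", "refutes"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(pairs, labels)
print(model.predict(["aliens found on mars [SEP] nasa said the claim is false"]))
```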

Why it’s hot:

  • When everyone can create content anywhere, it’s important that truth be validated and misinformation identified.
  • This is an immensely important and complex task, executed as a global hackathon spread over 6 months. Big challenges can be approached in new ways.
  • This challenge will result in new tools that could make their way into our publishing platforms, our social networks, etc. – is this potentially good or bad for us?


Human-like robots edge closer to reality

If you’ve lived in fear of a futuristic robot rebellion, the newest creation from Google-owned Boston Dynamics won’t do much to ease your fears. The Atlas humanoid robot is probably the most lifelike, agile and resilient robot built to date.  As the video shows, it can walk on snow and keep its balance, open doors, stack 10-pound boxes on shelves and even pick itself up from the floor after being knocked down. And that’s where things get a little frightening.

Even though this is only a demonstration, Atlas’ handler abuses it by knocking boxes out of its hands and then shoving it in the back with a stick so it falls on the floor. But much like a ninja fighter, it springs back up and keeps on going. If you hearken back to RoboCop, all this robot needs is a weapon to turn the tables on its human tormentor.

Why It’s Hot

Robots such as Atlas will someday be doing much of the back-breaking labor humans now do — picking crops, construction, firefighting. But as the author of the article where this appeared says, “Elon Musk once warned that Skynet (the evil artificial intelligence from the Terminator movies) could only be a few years off, and Google is increasingly looking like Skynet.” So while Atlas may act pretty cool and have good applications, it does have its ominous side.

Researchers create ‘self-aware’ Super Mario with artificial intelligence

Mario just got a lot smarter.

A team of German researchers has used artificial intelligence to create a “self-aware” version of Super Mario who can respond to verbal commands and automatically play his own game.

The Mario Lives project was created by a team of researchers out of Germany’s University of Tübingen as part of the Association for the Advancement of Artificial Intelligence’s (AAAI) annual video competition.

The video depicts Mario’s newfound ability to learn from his surroundings and experiences, respond to questions in English and German and automatically react to “feelings.”

If Mario is hungry, for example, he collects coins. “When he’s curious he will explore his environment and autonomously gather knowledge about items he doesn’t know much about,” the video’s narrator explains.

The video also demonstrates Mario’s ability to learn from experience. When asked “What do you know about Goomba” — that’s Mario’s longtime enemy in the Super Mario series — Mario first responds “I do not know anything about it.”

But after Mario, responding to a voice command, jumps on Goomba and kills it, he is asked the question again. This time, he responds “If I jump on Goomba then it maybe dies.”

Source: Mashable


Why It’s Hot

This showcases a fun use of artificial intelligence, a technology that typically feels a little scary. It could have implications for expanded use of and trust in AI, but for now it’s all in good fun and good tech.


Soothing robot in the doctor’s office

Going to the doctor can be a scary trip for children.  But a robot named MEDI can make the visit a little bit easier and less frightening.  Short for Medicine and Engineering Designing Intelligence, MEDI stays with the child through medical procedures, talking to them in one of 20 languages and offering soothing advice to get them through the visit.

Equipped with multiple cameras, facial recognition technology and the ability to speak directly to the little patients, MEDI is the product of Tanya Beran, a professor of community health sciences at the University of Calgary in Alberta.  Her team began developing MEDI three years ago and conducted a study of 57 children.  According to Yahoo Tech, “Each was randomly assigned a vaccination session with a nurse, who used the same standard procedures to dispense the medication. In some of those sessions, MEDi used cognitive-behavioral strategies to assuage the children as they got the shot. Afterward, children, parents, and nurses filled out surveys to estimate the pain and distress of the whole shebang.”

The result was that the kids who had MEDI by their side during the procedure reported less pain. Since that study, MEDI is being programmed for more serious procedures, from chemotherapy to blood transfusions to surgery.

Why it’s hot

Robotic technology is starting to come together with practical applications for people.  With motion, voice, the ability to recognize humans and interact with logical language patterns, MEDI is a natural step along the way to fully interactive robots, possibly even artificial intelligence.

Viv – The Global Brain

When Siri debuted in 2011, she was groundbreaking. Suddenly, each shiny new iPhone came with a virtual assistant, there to answer questions, take orders or just chat.

Siri’s limitations, however, were quickly revealed. While she could respond to direct one sentence requests (Call Sarah’s home phone) or answer simple questions (What time is it in California?), even seemingly straightforward demands (Locate the nearest Pinkberry) tripped her up. Soon, she became most useful as party fodder, passed around so guests could laugh at her programmed answers to philosophical questions.

In an attempt to fix those limitations, Siri co-founders Adam Cheyer and Dag Kittlaus, along with Chris Brigham (an early hire on the Siri team), are developing a new digital assistant that can handle complicated requests, using a crowdsourced approach. Instead of developing the system inside Apple, however, the group has broken out on its own to found the startup Viv Labs.

As hinted above, the central difference between Siri and Viv Labs’ AI system (appropriately named Viv) is that Siri’s responses are pre-programmed, while Viv is designed to learn as it goes, collecting an ever-expanding database of knowledge and abilities. The more people use Viv, the smarter it gets. (It’s kind of like the Waze of personal-assistant apps.)

Wired reported that Viv can already tackle complex requests, ones that would stump both Siri and Google Now (Google’s artificial intelligence, or AI, system): “You can [ask Google Now], ‘What is the population?’ of a city and it’ll bring up a chart and answer,” Kittlaus told the outlet. “But you cannot say, ‘What is the population of the city where Abraham Lincoln was born?'”
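Under the hood, handling a question like that means breaking it into steps and chaining the answers. Here’s a toy, hedged sketch of that chaining idea – obviously not Viv’s system, and the two “knowledge bases” are hard-coded dictionaries with illustrative figures.

```python
# Toy compound-query chaining: answer the inner clause first, feed it to the outer one.
BIRTHPLACES = {"Abraham Lincoln": "Hodgenville"}
POPULATIONS = {"Hodgenville": 3200}   # illustrative figure

def birthplace_of(person):
    return BIRTHPLACES[person]

def population_of(city):
    return POPULATIONS[city]

def population_of_birth_city(person):
    # "What is the population of the city where <person> was born?"
    city = birthplace_of(person)   # step 1: resolve "the city where ... was born"
    return population_of(city)     # step 2: apply "what is the population of ..."

print(population_of_birth_city("Abraham Lincoln"))  # -> 3200
```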

Website Here


Why It’s Hot

Detective Spooner would be going crazy right now because of this. The adaptive learning that Viv has could be groundbreaking for consumer goods. We always hear about AI and what its capabilities are, but it always feels out of reach or too advanced. Here we have the opportunity for a new technological advancement to be implemented into one of the most common products in the world. Could this type of technology help in any experiential environments created by brands?


Humin, the App That Wants to Take Over Your Phone

The new Humin app wants to do for the phone what WhatsApp did for messaging, or what Mailbox did for email. In short, Humin wants to be your new phone. The app combines your contacts, dialing and voicemail into one app, and uses contextual information to predict who your most important contacts are, and who you are most likely to want to connect with at any given time. Humin is essentially a re-imagined address book for your smartphone that aims to enrich human connections by presenting relevant information at the perfect time. Open Humin in Houston and you’ll see different information than if you were in Chicago.

The more you use the app, the more it learns from you, and the better its predictions will become.
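Here’s a rough, hypothetical sketch of what “contextual” contact ranking could look like – score each contact by frequency, recency, and whether they’re tied to the city you’re currently in, then surface the top of the list. Humin hasn’t published its ranking model; the weights and fields below are invented.

```python
# Toy contextual contact ranking; the scoring weights are made-up assumptions.
from datetime import datetime, timedelta

def score_contact(contact, current_city, now):
    days_since = (now - contact["last_contacted"]).days
    score = contact["calls_per_month"] * 2.0           # frequency
    score += max(0.0, 30 - days_since) / 30.0          # recency bonus
    if contact["city"] == current_city:
        score += 5.0                                   # local contacts rank higher
    return score

def rank_contacts(contacts, current_city, now=None):
    now = now or datetime.now()
    return sorted(contacts, key=lambda c: score_contact(c, current_city, now), reverse=True)

contacts = [
    {"name": "Dana", "city": "Houston", "calls_per_month": 2,
     "last_contacted": datetime.now() - timedelta(days=40)},
    {"name": "Sam", "city": "Chicago", "calls_per_month": 3,
     "last_contacted": datetime.now() - timedelta(days=2)},
]
print([c["name"] for c in rank_contacts(contacts, "Houston")])  # Dana rises to the top in Houston
print([c["name"] for c in rank_contacts(contacts, "Chicago")])  # the order flips in Chicago
```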


Why it’s hot:

Humin contextualizes the relationships you have with your contacts by displaying personal details about each individual and organizing contacts based on their relationship to you, instead of alphabetically. It is part of a growing trend of apps and services attempting to use context and anticipation to better serve users. For example, Google Now will give you the best route home for your daily commute just as you’re leaving your office. Amazon has a patent for “anticipatory shipping,” in which the delivery process is initiated before actual sales. Keep an eye out for continued innovation, as Humin has filed for artificial intelligence and machine learning patents that contribute to how it ranks human relationships.

California Ready For Driverless Cars

You need a license to drive a car. But does a robot?

For now, yes.

Come September, the California Department of Motor Vehicles will begin granting licenses to select driverless cars and their human co-pilots, which will make it a bit less legally iffy as to whether or not they’re actually allowed to be on a public road.

The good news: The license will only cost $150 a pop, and that covers 10 vehicles and up to 20 test drivers.

The bad (but probably actually good) news: You probably can’t get one, so don’t go trying to make your own Googlecar just yet.

The terms of the license are (as you might hope, in these early days) pretty strict.



Why It’s Hot

Technology is encompassing the simple things in life, such as daily transportation. What can this type of technology and intelligence lead to? There has to be a way for experiential marketing to take advantage of these new advancements in really cool ways. Brands like Amazon and Domino’s are already playing with drone technology, so what about this more familiar integration and how it can be used? There are far more cars in people’s lives than drones.

See the article here.