“Alexa, am I having a heart attack?”

Almost 500,000 Americans die each year from cardiac arrest, but now an unlikely new tool may help cut that number. Researchers at the University of Washington have figured out how to turn a smart speaker into a cardiac monitoring system. That’s right, in the not-too-distant future you may be able to ask Siri if you’re having a heart attack—even if you’re not touching the device.

Because smart speakers are always passively listening, waiting to be called into action with a “Hey Google” or “Alexa,” they are the perfect devices for picking up changes in breathing. So if someone starts gasping with so-called “agonal breathing” (add that to your Scrabble repertoire), the smart speaker can call for help. Agonal breathing is described by co-author Dr. Jacob Sunshine as “a sort of a guttural gasping noise” that is so unique to cardiac arrest that it makes “a good audio biomarker.” According to a press release, about 50% of people who experience cardiac arrest have agonal breathing, and since Alexa and Google are always listening, they can be taught to monitor for its distinctive sound.

On average, the proof-of-concept tool detected agonal breathing events 97% of the time from up to 20 feet (about 6 meters) away. The findings were published today in npj Digital Medicine. Why is it so good at spotting agonal breathing? Because the team trained it on a dataset of agonal breathing sounds captured from real 911 calls.

“A lot of people have smart speakers in their homes, and these devices have amazing capabilities that we can take advantage of,” said co-author Shyam Gollakota. “We envision a contactless system that works by continuously and passively monitoring the bedroom for an agonal breathing event, and alerts anyone nearby to come provide CPR. And then if there’s no response, the device can automatically call 911.”
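For the curious, the escalation flow Gollakota describes might look roughly like the Python sketch below. Everything in it is a stand-in: the function names (classify_agonal, play_alert, and so on), the window length, and the two-hit rule for cutting down false alarms are assumptions for illustration, not the team’s actual code.

```python
import time

# Placeholder components -- the team's real model and smart-speaker
# integration are not public, so these stubs only sketch the shape.

def capture_audio_window(seconds: float = 2.5) -> bytes:
    """Record a short window of audio from the speaker's microphone (stub)."""
    return b""

def classify_agonal(audio: bytes) -> bool:
    """Return True if the classifier flags agonal breathing (stub)."""
    return False

def play_alert() -> None:
    """Ask anyone nearby to respond and start CPR."""
    print("Possible cardiac arrest detected. Is anyone there? Please respond.")

def someone_responded(timeout_s: int = 30) -> bool:
    """Listen for a spoken or touch response within the timeout (stub)."""
    return False

def call_emergency() -> None:
    """Place an automated 911 call (stub)."""
    print("No response -- calling 911.")

def monitor_bedroom() -> None:
    consecutive_hits = 0
    while True:
        if classify_agonal(capture_audio_window()):
            consecutive_hits += 1
        else:
            consecutive_hits = 0

        # Require back-to-back positive windows before escalating, then
        # alert the room first and call 911 only if nobody answers.
        if consecutive_hits >= 2:
            play_alert()
            if not someone_responded(timeout_s=30):
                call_emergency()
            consecutive_hits = 0

        time.sleep(0.5)

if __name__ == "__main__":
    monitor_bedroom()
```

The interesting design choice is the two-step escalation: a false alarm wakes up a housemate, not a 911 dispatcher.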

Why It’s Hot

Despite the rather creepy notion that Amazon is always listening, this innovation is genuinely cool. What other kinds of health issues could it detect? As a parent, having a speaker that could tell whether a cough is run-of-the-mill or of the scary croup variety would be invaluable. For health events that announce themselves through sound, this is a step in the right direction.

Source: Fast Company

repeat after me…

A Canadian company called Lyrebird has created a way to replicate anyone’s voice using AI. From just 60 seconds of someone talking, the system can reproduce that person’s way of speaking. The company says it has already received thousands of ideas for how people could use this new capability:

Some companies, for example, are interested in letting their users choose to have audio books read in the voice of either famous people or family members. The same is true of medical companies, which could allow people with voice disabilities to train their synthetic voices to sound like themselves, if recorded samples of their speaking voices exist. Another interesting idea is for video game companies to offer the ability for in-game characters to speak with the voice of the human player.


Bigger still, they say their technology will let people create a unique voice of their own, with the ability to fully control even the emotion with which it speaks.

Why It’s Hot

Beyond being another example of life imitating art, this lands in a world where we already have quite a bit of control over how we portray ourselves. In the future, could we choose our own voice? Could we have different voices for every situation? How might we ever really be sure we know who we’re speaking to? Does the way someone has chosen to sound change the way we get to know them? And what if the voices of our friends and family can now be preserved in perpetuity?


Viv – The Global Brain

When Siri debuted in 2011, she was groundbreaking. Suddenly, each shiny new iPhone came with a virtual assistant, there to answer questions, take orders or just chat.

Siri’s limitations, however, were quickly revealed. While she could respond to direct one-sentence requests (Call Sarah’s home phone) or answer simple questions (What time is it in California?), even seemingly straightforward demands (Locate the nearest Pinkberry) tripped her up. Soon, she became most useful as party fodder, passed around so guests could laugh at her programmed answers to philosophical questions.

To overcome those limitations, Siri co-founders Adam Cheyer and Dag Kittlaus, along with Chris Brigham (an early hire on the Siri team), are developing a new digital assistant that can handle complicated requests, using a crowdsourced approach. Instead of developing the system inside Apple, however, the group has struck out on its own to found the startup Viv Labs.

As hinted above, the central difference between Siri and Viv Labs’ AI system (appropriately named Viv) is that Siri’s responses are pre-programmed, while Viv is designed to learn as it goes, collecting an ever-expanding database of knowledge and abilities. The more people use Viv, the smarter it gets. (It’s kind of like the Waze of personal-assistant apps.)

Wired reported that Viv can already tackle complex requests, ones that would stump both Siri and Google Now (Google’s artificial intelligence, or AI, system): “You can [ask Google Now], ‘What is the population?’ of a city and it’ll bring up a chart and answer,” Kittlaus told the outlet. “But you cannot say, ‘What is the population of the city where Abraham Lincoln was born?'”
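To see why that second question is harder, here is a toy Python sketch of the chaining involved. The lookup tables, helper names, and figures are invented for illustration and say nothing about how Viv’s actual engine works.

```python
# Toy knowledge base -- made-up tables, with a placeholder population figure.
BIRTHPLACE = {"Abraham Lincoln": "Hodgenville, Kentucky"}
POPULATION = {"Hodgenville, Kentucky": 3_200}  # illustrative number only

def birthplace_of(person: str) -> str:
    """Sub-question: where was this person born?"""
    return BIRTHPLACE[person]

def population_of(city: str) -> int:
    """Sub-question: what is this city's population?"""
    return POPULATION[city]

def population_of_birthplace(person: str) -> int:
    # A single-intent assistant can answer either lookup on its own; a
    # compositional one chains them, feeding the first answer into the second.
    city = birthplace_of(person)
    return population_of(city)

if __name__ == "__main__":
    print(population_of_birthplace("Abraham Lincoln"))  # -> 3200
```

The point isn’t the lookup itself; it’s that the assistant has to plan two steps and pass the intermediate result along, which is exactly what a pre-programmed, single-intent system can’t do.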

Website Here


Why It’s Hot

Detective Spooner would be going crazy right now because of this. The adaptive learning built into Viv could be groundbreaking for consumer goods. We always hear about AI and its capabilities, but it usually feels out of reach or too advanced. Here we have the opportunity for a real advancement to be built into one of the most common products in the world. Could this type of technology help in experiential environments created by brands?