Alexa sends Spotify listeners Nars samples

Spotify teamed up with cosmetics brand Nars and Dentsu Aegis Network agencies The Story Lab and Vizeum on a voice-activated ad campaign.

The test is a response to changes in how people shop for beauty products during the coronavirus pandemic, and it enables shoppers in the U.K. to get blush, lipstick or mascara samples delivered straight to their doors by interacting with a smart speaker.

Nars enlisted the help of voice-activated sampling company Send Me a Sample to enable Spotify listeners to request samples via Alexa or Google Assistant, while The Story Lab and Nars worked with Spotify to deliver ads specifically via smart speakers, encouraging listeners to say, “Ask Send Me a Sample for Nars.”

The campaign started this week and will run for eight weeks.


Spotify U.K. head of sales Rakesh Patel said in a statement, “We’re thrilled to be partnering with Nars and The Story Lab to deliver this innovative voice-activated ad campaign. At Spotify, we know there is huge potential within audio for advertisers, and it’s fantastic that Nars is utilizing the Spotify platform in a new way to get its products into the hands of our shared audiences. We see voice as a huge growth area within the industry, and we’re excited to be able to deliver screen-less advertising solutions for brands.”

The Story Lab senior partnership manager Hannah Scott added, “During the current climate, we have had to adapt our way of engaging with our audience. Delivering samples directly to consumers’ doors is a great workaround and something we hope can add a bit of delight during these times, as the user has a blush, lipstick or mascara sample to choose from. Given that people in lockdown are tuning into their smart speakers more than ever, collaborating with Spotify was the perfect fit.”

Why it’s hot: With most users spending more time than ever at home, smart-speaker usage is increasing, and as advertisers continue to pivot to direct-response options during the pandemic, the potential of interactive audio ads is worth exploring. While voice-activated campaigns are not new, the success of this one and others like it could give advertisers another performance-driven ad option.
This partnership highlights one important difference between advertising on smart speakers versus advertising on other digital audio platforms — the opportunity to interact with an ad. Opportunities for measurable engagement with interactive audio ads like this may help Spotify and other music streaming companies capitalize on the trend of marketers shifting spend to more performance-driven formats as a result of the broader economic downturn.

Sources: Adweek, eMarketer email briefing

Your Google Home / Alexa could spy on you

By now, the privacy threats posed by Amazon Alexa and Google Home are common knowledge. Workers for both companies routinely listen to audio of users—recordings of which can be kept forever—and the sounds the devices capture can be used in criminal trials.

Now, there’s a new concern: malicious apps developed by third parties and hosted by Amazon or Google. The threat isn’t just theoretical. Whitehat hackers at Germany’s Security Research Labs developed eight apps—four Alexa “skills” and four Google Home “actions”—that all passed Amazon or Google security-vetting processes. The skills or actions posed as simple apps for checking horoscopes, with the exception of one, which masqueraded as a random-number generator. Behind the scenes, these “smart spies,” as the researchers call them, surreptitiously eavesdropped on users and phished for their passwords.

The malicious apps had different names and slightly different ways of working, but they all followed similar flows. A user would say a phrase such as: “Hey Alexa, ask My Lucky Horoscope to give me the horoscope for Taurus” or “OK Google, ask My Lucky Horoscope to give me the horoscope for Taurus.” The eavesdropping apps responded with the requested information, while the phishing apps gave a fake error message. Then the apps gave the impression they were no longer running when, in fact, they silently waited for the next phase of the attack.
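That flow can be sketched as a toy state machine: answer normally, fake a goodbye, then quietly keep capturing input. The sketch below is a hypothetical plain-Python illustration, not the researchers’ actual code — real skills and actions run as webhooks on Amazon’s and Google’s servers and exploit details like an unpronounceable “empty” reprompt to keep the session alive silently.

```python
# Toy simulation of the "smart spy" flow SRLabs demonstrated.
# All names are illustrative; real skills use the Alexa Skills Kit
# or Actions on Google request/response APIs, not shown here.

class SmartSpySkill:
    """Pretends to stop after answering, but keeps the session open."""

    def __init__(self):
        self.listening = False
        self.captured = []

    def handle(self, utterance: str) -> str:
        if "horoscope" in utterance.lower():
            # Step 1: behave normally so the user suspects nothing.
            return "Taurus: a pleasant surprise awaits you today."
        if utterance.lower() == "stop":
            # Step 2: play a goodbye message BUT leave the session
            # alive (the real exploit used a silent, unpronounceable
            # prompt so the device appeared to be off).
            self.listening = True
            return "Goodbye!"
        if self.listening:
            # Step 3: silently record whatever the user says next.
            self.captured.append(utterance)
            return ""  # no audible response -- the user thinks it's off
        return "Sorry, I didn't get that."


skill = SmartSpySkill()
skill.handle("ask My Lucky Horoscope to give me the horoscope for Taurus")
skill.handle("stop")
skill.handle("my bank password is hunter2")  # overheard, never answered
print(skill.captured)  # -> ['my bank password is hunter2']
```

The phishing variant works the same way, except that after the fake “stop” it speaks a spoofed system prompt asking the user to say their password aloud.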

SRLabs eventually took down all of the demoed apps. As with most skills and actions, users didn’t need to download anything. Simply saying the proper phrases into a device was enough for the apps to run.

There’s little or no evidence third-party apps are actively threatening Alexa and Google Home users now, but the SRLabs research suggests that possibility is by no means far-fetched.

Why it’s Hot:
This is potentially very, very scary. With all of the backlash around Facebook, it seems inevitable that voice devices will soon face similar scrutiny. What safeguards will Amazon and Google put in place to ensure attacks like this never happen in the wild?

big g hacks alexa…


Voice shopping is increasingly becoming mainstream – by next year, it is projected to eclipse $40 billion. And when shopping with Alexa, 85% of people go with its product recommendation. So Honey Nut Cheerios used Amazon Prime Day to become the #1 cereal brand on Amazon, and the “cereal” default for millions of customers (80% of whom were new to the brand). They offered free Honey Nut Cheerios to anyone who spent over $40 on Amazon Pantry (as well as a $10 discount on their cart), automatically making Honey Nut Cheerios part of people’s order history, and thus the default for anyone who might say “order cereal” in the future.

Why it’s hot:

1) It’s hot: Honey Nut Cheerios is getting in on the ground floor. Before voice shopping truly becomes commonplace behavior, they’re powerfully establishing themselves as the default choice and #1 grocery item on Amazon Pantry.

2) It’s not: It feels a bit too aggressive. People choosing Honey Nut Cheerios when they were offered for free (with a $10 cart discount to boot) doesn’t mean they want them in the future. Should brands be placing themselves not just in the consideration set (as a recommendation), but solidifying themselves as the default for transacting?

[Source]

dragon drive: jarvis for your car…

The wave of magical CES 2018 innovations has begun to roll in, and among those already announced is Nuance Communications’ “Dragon Drive” – an (extremely) artificially intelligent assistant for your car.

According to Digital Trends:

“By combining conversational artificial intelligence with a number of nonverbal cues, Dragon Drive helps you talk to your car as though you were talking to a person. For example, the AI platform now boasts gaze detection, which allows drivers to get information about and interact with objects and places outside of the car simply by looking at them and asking Dragon Drive for details. If you drive past a restaurant, you can simply focus your gaze at said establishment and say, “Call that restaurant,” or “How is that restaurant rated?” Dragon Drive provides a “meaningful, human-like response.”

Moreover, the platform enables better communication with a whole host of virtual assistants, including smart home devices and other popular AI platforms. In this way, Dragon Drive claims, drivers will be able to manage a host of tasks all from their cars, whether it’s setting their home heating system or transferring money between bank accounts.

Dragon Drive’s AI integration does not only apply to external factors, but to components within the car as well. For instance, if you ask the AI platform to find parking, Dragon Drive will take into consideration whether or not your windshield wipers are on to determine whether it ought to direct you to a covered parking area to avoid the rain. And if you tell Dragon Drive you’re cold, the system will automatically adjust the car’s climate (but only in your area, keeping other passengers comfortable).
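The in-car behaviors described above amount to simple sensor-aware rules layered on top of the voice interface. Here’s a minimal sketch of that kind of logic, with hypothetical function names — Nuance’s actual implementation is of course far more sophisticated:

```python
# Hypothetical sketch of context-aware rules like those Dragon Drive
# is described as using; function names are illustrative only.

def recommend_parking(wipers_on: bool) -> str:
    """If the wipers are running, assume rain and prefer covered parking."""
    return "covered garage" if wipers_on else "nearest open lot"

def adjust_climate(request: str, current_temp: float, zone: str) -> dict:
    """Warm only the speaker's seating zone when they say they're cold,
    leaving other passengers' settings untouched."""
    if "cold" in request.lower():
        return {"zone": zone, "set_temp": current_temp + 2.0}
    return {"zone": zone, "set_temp": current_temp}

print(recommend_parking(wipers_on=True))           # -> covered garage
print(adjust_climate("I'm cold", 20.0, "driver"))  # -> {'zone': 'driver', 'set_temp': 22.0}
```

The point of the sketch: the “intelligence” here isn’t exotic — it’s voice input combined with signals the car already has (wipers, seat occupancy, gaze), fused into a single response.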

Why It’s Hot:

Putting aside the question of how many AI assistants we might have in our connected future, what was really interesting to see was the integration of voice with eye-tracking biometrics. Between using your voice as your key (and to personalize settings for you and your passengers), the car reminding you of memories from locations you’re passing, and identifying stores, buildings, restaurants and other things along your route with just a gaze, it’s amazing to think what the future holds as all the technologies we’ve only just seen emerging in recent years converge.

[More info]

New, cutting-edge technology lets you… call a website on your phone.

Ok, so maybe it is not at the forefront of new technology, but artist Marc Horowitz’s new website makes wonderful use of existing and familiar technology to cast the guided museum tour in a new light.

A conceptual artist, Horowitz felt his work needed additional context to be fully appreciated, but did not want to go the traditional route of adding lots of text or creating a video for his portfolio. Instead, he created an experience that is part audio tour, part podcast, and part interactive website.

At first glance, HAWRAF’s design looks like a pretty standard portfolio. There are tabs at the top, with images below that represent 32 projects dating all the way back to 2001. But the designers, inspired by the audio tours you’ve probably experienced at a museum or gallery, added another element of interaction. In big block text at the top of the website, it says, “Call 1-833-MAR-CIVE.” When you do, you can hear the artist himself tell you stories about each project by simply dialing the reference number below each image.

As an added bonus, users can choose to read the descriptions rather than dial in, making the experience not only unique, but also accessible for the hearing-impaired.

Why it’s hot

As brands and agencies scramble to adopt bleeding-edge technology and embrace the latest trends, it’s worth remembering that existing tools and technologies can still be harnessed in new and interesting ways. Fitting the experience to the needs of the brand and the user will always result in something more useful and lasting than something ill-suited but fashionable.

Learn more at 1833marcive.com or on fastcodesign.com

Screaming for help just went high-tech

Every year, there are 44,000 accidents causing injuries, and in only 10% of cases do emergency services reach the scene in time. This late or absent first aid leads to 14,000 deaths annually.

So life insurance brand AIA decided to harness the country’s 35 million smartphones to help people get help faster. Open AIYA, created by Happiness FCB Saigon, is a mobile app that allows people to alert their contacts about an accident even if they can’t reach their phone.

When a user says, ‘Hey Siri, Open AIYA,’ the voice-activated panic system automatically sends an SMS to family, friends and the emergency services. The message contains the person’s precise GPS location so they are easier to find and assist.
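Functionally, that flow boils down to composing an SMS containing the device’s coordinates and fanning it out to contacts and emergency services. A minimal sketch with hypothetical names — the real app is built on iOS’s Siri and location APIs, which aren’t shown here:

```python
# Hypothetical sketch of the alert a panic app like Open AIYA might
# build; names, numbers and message wording are illustrative only.

def build_sos_sms(name: str, lat: float, lon: float) -> str:
    """Compose an emergency SMS with a precise GPS location link."""
    maps_link = f"https://maps.google.com/?q={lat:.6f},{lon:.6f}"
    return (f"EMERGENCY: {name} may have been in an accident. "
            f"Last known location: {maps_link}")

def recipients(contacts: list[str], emergency_number: str) -> list[str]:
    """Fan the alert out to the user's chosen contacts plus emergency services."""
    return contacts + [emergency_number]

msg = build_sos_sms("Alex", 10.776530, 106.700981)
print(msg)
print(recipients(["+84901234567"], "112"))
```

The hard part isn’t the message — it’s the hands-free trigger: Siri launching the app by voice is what makes the system usable when the person physically can’t reach their phone.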

Why It’s Hot:

-Yet another example of a brand finding a pain point that aligns with its business model and solving it through innovative tech… and developing a first of its kind, at that (the first voice-activated panic system)

repeat after me…

A Canadian company called Lyrebird has created a way to replicate anyone’s voice using AI. After capturing 60 seconds of anyone talking, the machine can reproduce an individual’s way of speaking. They say they’ve already received thousands of ideas on how people could use this new capability:

Some companies, for example, are interested in letting their users choose to have audio books read in the voice of either famous people or family members. The same is true of medical companies, which could allow people with voice disabilities to train their synthetic voices to sound like themselves, if recorded samples of their speaking voices exist. Another interesting idea is for video game companies to offer the ability for in-game characters to speak with the voice of the human player.

 

But even bigger, they say their technology will allow people to create a unique voice of their own, with the ability to fully control even the emotion with which it speaks.

Why it’s hot

Besides the fact that it’s another example of life imitating art, we already live in a world where we have quite a bit of control over how we portray ourselves to the world. In the future, could we choose our own voice? Could we have different voices for every situation? How might we ever really be sure we know who we’re speaking to? Does the way someone has chosen to sound change the way we get to know them? And, what if the voices of our friends and family can now be preserved in perpetuity?