Amazon crowdsourcing answers to questions posed to Alexa

Crowdsourcing strikes again. Incentivized by the lure of social capital, users can submit answers to questions posed to Alexa to earn points and status within the network of answerers. The public, using an up-and-down vote system, will presumably let the best answer float to the top.
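The voting mechanics described above can be sketched in a few lines. This is a minimal illustration of net-score ranking; Amazon has not published Alexa Answers' actual ranking logic, and the answer entries below are hypothetical.

```python
# Minimal sketch of crowd-vote ranking: answers are sorted by net score
# (upvotes minus downvotes), so the best-rated answer "floats to the top".
# Hypothetical data; not Amazon's actual algorithm.

def rank_answers(answers):
    """Sort candidate answers by net votes, highest first."""
    return sorted(answers, key=lambda a: a["up"] - a["down"], reverse=True)

candidates = [
    {"text": "Answer A", "up": 12, "down": 3},
    {"text": "Answer B", "up": 4, "down": 9},
]

best = rank_answers(candidates)[0]
print(best["text"])  # the answer Alexa would read back
```

In practice a real system would also weight votes by answerer reputation and recency, which is presumably where the "human editors as well as algorithms" come in.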

Though, as Fast Company notes, “in some cases, human editors as well as algorithms will be involved in quality-control measures.”

From Fast Company: “Starting today, Amazon is publicly launching a program called Alexa Answers, which lets anyone field questions asked by users for which Alexa doesn’t already have a response—ones such as:

  • What states surround Illinois?
  • What’s the proper amount of sleep?
  • How many instruments does Stevie Wonder play?
  • How much is in a handle of alcohol?

From then on, when people ask a question, Alexa will speak an answer generated through Alexa Answers, noting that the information is ‘according to an Amazon customer.'”

Why it’s hot:

Will value-based questions be answerable? If so, owning the answer to ‘what’s the best burger in Brooklyn?’ would be very lucrative.

Can brands leverage this tech to their advantage? Either by somehow “hacking” this system in a playful way, or by replicating such an answer system with their own user base to plug into an Alexa skill?

On a broader level:

How much do we trust the crowd? Recent history has left many questioning the validity of “the wisdom of the people”.

Civil society runs on a foundation of shared understandings about the world. If we trust answers about our reality to come from the crowd, how will bad actors use such a system to undermine our shared understanding or subtly sway public knowledge to support their agenda? Alexa, does life start at conception?

“Alexa, Open Reebok Sneaker Drop”

Reebok is giving away limited-edition “Club C” sneakers as part of its campaign with Cardi B, and the only way to enter to win is via smart speaker. All you have to do is ask Alexa or Google Assistant to “Open Reebok Sneaker Drop” to participate in the giveaway of the Swarovski-crystal-encrusted shoes.

Entrants will have to check in with their voice assistants on September 7th between 10 a.m. and 12 p.m. to see if they’ve won. Saying the command “Ask Reebok Sneaker Drop if I won,” followed by the passcode “Get my Club C’s,” is the final step to find out whether they are one of the 50 winners or 150 runners-up.

Why It’s Hot

Limited-quantity product drops are key to sneaker culture. Adding voice-assistant technology heightens the exclusivity and excitement of trying to secure a coveted pair of shoes.

Source

Big G hacks Alexa…

Voice shopping is rapidly becoming mainstream – by next year, it is projected to eclipse $40 billion. And when shopping with Alexa, 85% of people go with its product recommendation. So Honey Nut Cheerios used Amazon Prime Day to become the #1 cereal brand on Amazon, and the “cereal” default for millions of customers (80% of whom were new to the brand). The brand offered free Honey Nut Cheerios to anyone who spent over $40 on Amazon Pantry (as well as a $10 discount on their cart), automatically making Honey Nut Cheerios part of people’s order histories, and thus the default for those who might say “order cereal” in the future.

Why it’s hot:

1) It’s hot: Honey Nut Cheerios is getting in on the ground floor. Before voice shopping truly becomes commonplace behavior, they’re powerfully establishing themselves as the default choice and #1 grocery item on Amazon Pantry.

2) It’s not: It feels a bit too aggressive. People choosing Honey Nut Cheerios when they were offered for free (with a $10 cart discount to boot) doesn’t mean they want them in the future. Should brands be placing themselves not just in the consideration set (as a recommendation), but solidifying themselves as the default for transacting?

[Source]

From smart homes to smart offices: Meet Alexa for Business

During the AWS re:Invent conference in Las Vegas, Amazon announced the Alexa for Business platform, along with a set of initial partners that have developed specific “skills” for business customers.

The main goal seems to be making Alexa a key tool for office workers:

– The first focus for Alexa for Business is the conference room. AWS is working with Polycom and other video and audio conferencing providers to enable this.

– Other partners include Microsoft (better support for its suite of productivity services), Concur (travel expenses), Splunk (big data generated by your technology infrastructure, security systems, and business applications), Capital One, and WeWork.

But that’s just what they are planning to offer at launch; the new platform will also let companies build out their own skills and integrations.

Why It’s Hot

We are finally seeing these technologies take a step toward being genuinely useful and mainstream. Since Amazon wants to integrate Alexa with other platforms, it could be an interesting foundation for future innovations.

Source: TechCrunch

Google Home can now recognize multiple voices

Google Home can now be trained to identify the different voices of people you live with. Today Google announced that its smart speaker can support up to six different accounts on the same device. The addition of multi-user support means that Google Home will now tailor its answers for each person and know which account to pull data from based on their voice. No more hearing someone else’s calendar appointments.

So how does it work? As Google explains it: when you connect your account on a Google Home, you’re asked to say the phrases “Ok Google” and “Hey Google” two times each. Those phrases are then analyzed by a neural network, which can detect certain characteristics of a person’s voice. From that point on, any time you say “Ok Google” or “Hey Google” to your Google Home, the neural network compares the sound of your voice to its previous analysis to determine whether it’s you speaking. This comparison takes place only on the device, in a matter of milliseconds.
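The compare-against-enrollment idea can be sketched as follows. Google has not published the internals, so this is a toy illustration: assume the neural network reduces each utterance to a fixed-length “voiceprint” vector, and matching is a similarity comparison against each of the (up to six) enrolled profiles. The vectors and threshold are made-up stand-ins.

```python
import math

# Toy sketch of on-device speaker matching: compare an utterance's
# "voiceprint" vector against each enrolled profile using cosine
# similarity. Hypothetical numbers; not Google's actual model.

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify_speaker(utterance_vec, enrolled, threshold=0.8):
    """Return the best-matching enrolled user, or None if no match clears the threshold."""
    best_user, best_score = None, threshold
    for user, profile_vec in enrolled.items():
        score = cosine_similarity(utterance_vec, profile_vec)
        if score > best_score:
            best_user, best_score = user, score
    return best_user

enrolled = {"alice": [0.9, 0.1, 0.3], "bob": [0.1, 0.8, 0.5]}
print(identify_speaker([0.88, 0.12, 0.31], enrolled))  # close to alice's profile
```

A `None` result corresponds to an unrecognized voice, which is how the device would know not to read out anyone’s calendar.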

Why it’s hot:

– Everyone in the family gets a personal assistant.
– Imagine how it might work in a small business or office.
– Once it starts recognizing more than six voices, could every department have its own AI assistant?

Voice Recognition Software Translates Words from Those With Speech Disorders

The Philips Innovation Fellows Competition invites makers and inventors interested in health and well-being to prove their ideas in the testing ground of crowdfunding, then picks one of the successful projects to back with prize money intended to accelerate bringing the innovation to market. This year’s winner is Talkitt.

Talkitt is a voice recognition software that translates what people with speech disorders mean and turns it into sounds that voice-to-text applications (and people not used to listening) can understand. It works much like any voice-to-text program, by attuning itself to the peculiarities of an individual’s pronunciation and word choice, but is optimized to understand the sounds made by people with challenges in standard pronunciation.

According to statistics from the National Institutes of Health, approximately 7.5 million people in the United States alone have some kind of impediment to using their voices. As technology becomes progressively more voice-driven, people with these disabilities become ever more disenfranchised. Talkitt can reverse this trend by not only connecting those individuals more fully to available tech, but also by helping them connect more fully with the people in their lives.

Voiceitt’s Indiegogo campaign raised over $25,000 during the crowdfunding phase, and the company received a $60,000 prize plus publicity assistance and mentoring from Philips executives to bring Talkitt to market.

Source: PSFK

Why It’s Hot

Voice recognition and interpretation have been a hot topic in recent weeks and months, from real-time translation (Skype and texting apps) to home entertainment (Xbox) to shopping (The North Face). So has using technology to track and improve health. This is an interesting integration of the two, with real implications for quality of life.

SemaConnect electric car charging stations get their own Google Glass app

SemaConnect, which makes electric vehicle charging stations, has launched an application on Google Glass to make it easier for drivers to navigate to the closest charging stations at a nearby Walgreens or Dunkin’ Donuts.

The app leverages augmented reality to make navigation faster and easier, with users able to locate the closest charging stations within a 20-mile radius. Users can also enable turn-by-turn navigation to station locations and initiate a charging session.
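The “closest stations within a 20-mile radius” lookup described above can be sketched with a great-circle distance calculation. The station names and coordinates below are hypothetical, and SemaConnect’s actual app and data feed are not public; this is just the underlying geometry.

```python
import math

# Sketch of a nearest-station lookup using the haversine great-circle
# distance, filtered to a 20-mile radius and sorted nearest-first.
# Station data is hypothetical.

EARTH_RADIUS_MILES = 3959.0

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))

def stations_within(driver_lat, driver_lon, stations, radius_miles=20.0):
    """Return (distance, station) pairs inside the radius, nearest first."""
    hits = [(haversine_miles(driver_lat, driver_lon, s["lat"], s["lon"]), s)
            for s in stations]
    return sorted([h for h in hits if h[0] <= radius_miles], key=lambda h: h[0])

stations = [
    {"name": "Walgreens (hypothetical)", "lat": 38.99, "lon": -76.49},
    {"name": "Dunkin' Donuts (hypothetical)", "lat": 39.17, "lon": -76.61},
]
for dist, s in stations_within(38.98, -76.50, stations):
    print(f"{s['name']}: {dist:.1f} mi")
```

The Glass app would layer turn-by-turn navigation and the charging-session handshake on top of a lookup like this.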

When a driver arrives at a station, saying “Control my car” starts the charging session. If a fee applies, it is automatically billed to the user’s credit card.


Why It’s Hot

While Google Glass is still in its early days, and people are just beginning to use the device and figure out its capabilities, electric vehicle owners are likely early adopters anyway.

The big advantage of using Google Glass is that the user need not take her hands off the wheel or her eyes off the road. The app is also driven largely by voice commands.

How The North Face uses voice search to drive mcommerce sales

Outdoor gear and apparel retailer The North Face continues to see strong results from its use of natural language and voice-enabled search, helping its sites across mobile and desktop in several European countries deliver a 35 percent increase in search conversion rate and a 24 percent increase in revenue from search.

EasyAsk has been deployed across 11 sites in nine countries, including Britain, Germany, the Netherlands, Sweden, France, Italy, Spain, and Austria. As a result, visitors to these sites can search using specific terms in their local language, as opposed to traditional keyword search.


Read more here.

Why It’s Hot

Voice-enabled on-site search makes sense on mobile because users are familiar with speaking into their smartphones. The problem is still accuracy: I keep getting pizza-place recommendations from Siri whenever I search for dry cleaners…

For on-the-go users who may be trying to find something quickly, natural language search means they can quickly and easily find what they are looking for without having to use a general keyword and then scroll through a lot of unrelated results. I get it for public restrooms, but how urgent is your need for a new “warm winter jacket”?