Inside Amazon’s plan for Alexa to run your entire life

The creator of the famous voice assistant dreams of a world where Alexa is everywhere, anticipating your every need.

Speaking with MIT Technology Review, Rohit Prasad, Alexa’s head scientist, revealed further details about where Alexa is headed next. The crux of the plan is for the voice assistant to move from passive to proactive interactions. Rather than wait for and respond to requests, Alexa will anticipate what the user might want. The idea is to turn Alexa into an omnipresent companion that actively shapes and orchestrates your life. This will require Alexa to get to know you better than ever before.

In June at the re:MARS conference, Prasad demoed [view from 53:54] a feature called Alexa Conversations, showing how it might be used to help you plan a night out. Instead of manually initiating a new request for every part of the evening, you would need only to begin the conversation—for example, by asking to book movie tickets. Alexa would then follow up to ask whether you also wanted to make a restaurant reservation or call an Uber.

A more intelligent Alexa

Here’s how Alexa’s software updates will come together to execute the night-out planning scenario. In order to follow up on a movie ticket request with prompts for dinner and an Uber, a neural network learns—through billions of user interactions a week—to recognize which skills are commonly used with one another. This is how intelligent prediction comes into play. When enough users book a dinner after a movie, Alexa will package the skills together and recommend them in conjunction.
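The co-occurrence idea can be sketched in a few lines of Python. Everything here is invented for illustration (the session logs, skill names, and counting threshold); Amazon's actual system is a neural network trained on billions of weekly interactions, not a pair counter:

```python
from collections import Counter
from itertools import combinations

# Hypothetical interaction logs: each session lists the skills one user invoked together.
sessions = [
    ["movie_tickets", "restaurant_reservation", "ride_hailing"],
    ["movie_tickets", "ride_hailing"],
    ["weather", "news"],
    ["movie_tickets", "restaurant_reservation"],
]

# Count how often each pair of skills appears in the same session.
pair_counts = Counter()
for session in sessions:
    pair_counts.update(combinations(sorted(set(session)), 2))

def companions(skill, min_count=2):
    """Skills used alongside `skill` often enough to recommend as a package."""
    related = []
    for (a, b), n in pair_counts.items():
        if n >= min_count and skill in (a, b):
            related.append(b if a == skill else a)
    return sorted(related)

print(companions("movie_tickets"))  # ['restaurant_reservation', 'ride_hailing']
```

Once enough users pair a movie booking with dinner and a ride, the package suggests itself.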

But reasoning is required to know what time to book the Uber. Taking into account your and the theater’s location, the start time of your movie, and the expected traffic, Alexa figures out when the car should pick you up to get you there on time.
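Once a traffic-aware travel estimate is in hand, the scheduling step is simple arithmetic: work backward from the showtime. A toy sketch (the 15-minute buffer and the travel estimate are assumptions, not Alexa's actual logic):

```python
from datetime import datetime, timedelta

def pickup_time(movie_start, travel_minutes, buffer_minutes=15):
    """Latest reasonable pickup: showtime minus traffic-adjusted travel time,
    minus a buffer for parking and collecting tickets."""
    return movie_start - timedelta(minutes=travel_minutes + buffer_minutes)

show = datetime(2019, 11, 1, 19, 30)
print(pickup_time(show, travel_minutes=25))  # 2019-11-01 18:50:00
```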

Prasad imagines many other scenarios that might require more complex reasoning. You could imagine a skill, for example, that would allow you to ask your Echo Buds where the tomatoes are while you’re standing in a Whole Foods. The Buds will need to register that you’re in that particular store, access a map of its floor plan, and then tell you the tomatoes are in aisle seven.

In another scenario, you might ask Alexa through your communal home Echo to send you a notification if your flight is delayed. When it’s time to do so, perhaps you are already driving. Alexa needs to realize (by identifying your voice in your initial request) that you, not a roommate or family member, need the notification—and, based on the last Echo-enabled device you interacted with, that you are now in your car. Therefore, the notification should go to your car rather than your home.

This level of prediction and reasoning will also need to account for video data as more and more Alexa-compatible products include cameras. Let’s say you’re not home, Prasad muses, and a Girl Scout knocks on your door selling cookies. The Alexa on your Amazon Ring, a camera-equipped doorbell, should register (through video and audio input) who is at your door and why, know that you are not home, send you a note on a nearby Alexa device asking how many cookies you want, and order them on your behalf.

To make this possible, Prasad’s team is now testing a new software architecture for processing user commands. It involves filtering audio and visual information through many more layers. First Alexa needs to register which skill the user is trying to access among the roughly 100,000 available. Next it will have to understand the command in the context of who the user is, what device that person is using, and where. Finally it will need to refine the response on the basis of the user’s previously expressed preferences.
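In outline, those three layers behave like a pipeline: classify the skill, resolve the context, then personalize. A minimal sketch, with every name, field, and the keyword matcher invented for illustration (the real system classifies among roughly 100,000 skills with neural models, not substring matching):

```python
# Layer 1: decide which skill the user is trying to access.
SKILLS = {"movie tickets": "movie_tickets", "restaurant": "restaurant_reservation"}

def classify_skill(utterance):
    for phrase, skill in SKILLS.items():
        if phrase in utterance.lower():
            return skill
    return "fallback"

# Layer 2: who is speaking, on what device, and where.
def resolve_context(user, device):
    return {"user": user["name"], "device": device,
            "location": user.get("location", "unknown")}

# Layer 3: refine the response using previously expressed preferences.
def respond(utterance, user, device):
    skill = classify_skill(utterance)
    ctx = resolve_context(user, device)
    pref = user.get("preferences", {}).get(skill)
    base = f"Running {skill} for {ctx['user']} on {ctx['device']}"
    return f"{base} (preference: {pref})" if pref else base

alice = {"name": "Alice", "location": "home",
         "preferences": {"movie_tickets": "aisle seats"}}
print(respond("Book movie tickets for tonight", alice, "Echo Show"))
```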

Why It’s Hot:  “This is what I believe the next few years will be about: reasoning and making it more personal, with more context,” says Prasad. “It’s like bringing everything together to make these massive decisions.”

Adobe debuts latest effort in the misinformation arms race

Adobe has previewed an AI tool that analyzes the pixels of an image to determine the probability that it’s been manipulated, showing the areas where it thinks the manipulation took place as a heat map.

It’s fitting that the company that made sophisticated photo manipulation possible would also create a tool to help combat its nefarious use. While it’s not live in Adobe applications yet, it could be integrated into them, so that users can quickly tell whether what they’re looking at is “real” or not.

Up next: the inevitable headline about someone creating a tool that can trick Adobe’s AI into thinking a manipulated photo is real.

Why it’s hot:

Fake news is a big problem, and this might help us get to the truth of some matters of consequence.

But … not everything can be solved with AI. This might help people convince others that something they saw is in fact fake, but it doesn’t overcome the deeper problem of people’s basic gullibility, lack of critical thinking, and strong desire to justify their already entrenched beliefs.

Source: The Verge

Google Claims a Quantum Breakthrough That Could Change Computing

Google said on Wednesday that it had achieved a long-sought breakthrough called “quantum supremacy,” which could allow new kinds of computers to do calculations at speeds that are inconceivable with today’s technology.

The Silicon Valley giant’s research lab in Santa Barbara, Calif., reached a milestone that scientists had been working toward since the 1980s: Its quantum computer performed a task that isn’t possible with traditional computers, according to a paper published in the science journal Nature.

A quantum machine could one day drive big advances in areas like artificial intelligence and make even the most powerful supercomputers look like toys. The Google device did in 3 minutes 20 seconds a mathematical calculation that supercomputers could not complete in under 10,000 years, the company said in its paper.

Scientists likened Google’s announcement to the Wright brothers’ first plane flight in 1903 — proof that something is really possible even though it may be years before it can fulfill its potential.

Still, some researchers cautioned against getting too excited about Google’s achievement since so much more work needs to be done before quantum computers can migrate out of the research lab. Right now, a single quantum machine costs millions of dollars to build.

Many of the tech industry’s biggest names, including Microsoft, Intel and IBM as well as Google, are jockeying for a position in quantum computing. And venture capitalists have invested more than $450 million into start-ups exploring the technology, according to a recent study.

China is spending $400 million on a national quantum lab and has filed almost twice as many quantum patents as the United States in recent years. The Trump administration followed suit this year with its own National Quantum Initiative, promising to spend $1.2 billion on quantum research, including computers.

A quantum machine, the result of more than a century’s worth of research into a type of physics called quantum mechanics, operates in a completely different manner from regular computers. It relies on the mind-bending ways some objects act at the subatomic level or when exposed to extreme cold, like the metal chilled to nearly 460 degrees below zero inside Google’s machine.

“We have built a new kind of computer based on some of the unusual capabilities of quantum mechanics,” said John Martinis, who oversaw the team that managed the hardware for Google’s quantum supremacy experiment. Noting the computational power, he added, “We are now at the stage of trying to make use of that power.”

On Monday, IBM fired a pre-emptive shot with a blog post disputing Google’s claim that its quantum calculation could not be performed by a traditional computer. The calculation, IBM argued, could theoretically be run on a current computer in less than two and a half days — not 10,000 years.

“This is not about final and absolute dominance over classical computers,” said Dario Gil, who heads the IBM research lab in Yorktown Heights, N.Y., where the company is building its own quantum computers.

Other researchers dismissed the milestone because the calculation was notably esoteric. It generated random numbers using a quantum experiment that can’t necessarily be applied to other things.

As its paper was published, Google responded to IBM’s claims that its quantum calculation could be performed on a classical computer. “We’ve already peeled away from classical computers, onto a totally different trajectory,” a Google spokesman said in a statement. “We welcome proposals to advance simulation techniques, though it’s crucial to test them on an actual supercomputer, as we have.”

Source: NY Times

Why It’s Hot

It’s hard to even fathom what possibilities this opens, but it seems application is still a while away.

Your phone’s camera didn’t capture the moment. It computed it.

The way our cameras process and represent images is changing in a subtle but fundamental way, shifting cameras from ‘capturing the moment’ to creating it with algorithmic computations.

Reporting about the camera on Google’s new Pixel 4 smartphone, Brian Chen of the New York Times writes:

“When you take a digital photo, you’re not actually shooting a photo anymore.

‘Most photos you take these days are not a photo where you click the photo and get one shot,’ said Ren Ng, a computer science professor at the University of California, Berkeley. ‘These days it takes a burst of images and computes all of that data into a final photograph.’

Computational photography has been around for years. One of the earliest forms was HDR, for high dynamic range, which involved taking a burst of photos at different exposures and blending the best parts of them into one optimal image.

Over the last few years, more sophisticated computational photography has rapidly improved the photos taken on our phones.”
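The HDR technique described in the quote can be sketched in toy form: blend a burst by weighting each pixel toward well-exposed values and away from blown-out or crushed ones. This is a grayscale, illustrative weighting scheme only; real pipelines also align frames, denoise, and tone-map:

```python
def merge_hdr(burst):
    """burst: list of same-size grayscale images as nested lists of 0-255 values."""
    h, w = len(burst[0]), len(burst[0][0])
    merged = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[y][x] for img in burst]
            # Weight well-exposed pixels more; blown-out (255) and crushed (0) get weight 0.
            weights = [1.0 - abs(v - 127.5) / 127.5 for v in vals]
            total = sum(weights)
            if total == 0:  # every frame clipped here: fall back to a plain average
                merged[y][x] = sum(vals) / len(vals)
            else:
                merged[y][x] = sum(v * wt for v, wt in zip(vals, weights)) / total
    return merged

# A blown-out frame contributes nothing; the well-exposed frame wins.
flat = merge_hdr([[[120]], [[255]]])
print(round(flat[0][0]))  # 120
```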

This technology is evident in Google’s Night Sight, which is capable of capturing low-light photos without a flash.

Why it’s hot: 

In a world where the veracity of photographs and videos is coming into question because of digital manipulation, it’s interesting that alteration is now baked in.

Immortalized in Film…? Not so fast.

Tencent Shows The Future Of Ads; Will Add Ads In Existing Movies, TV Shows

One of China’s largest online video platforms is setting out to use technology to integrate branded content into movies and TV shows from any place or era.

(Yes, a Starbucks on Tatooine…or Nike branded footwear for the first moonwalk.)

Why It’s Hot:  

  1. Potentially exponential expansion of available ad inventory
  2. Increased targetability by interest, plus top-spin of borrowed interest
  3. Additional revenue streams for content makers
  4. New questions of the sanctity of creative vision, narrative intent and historical truth

Advertising is an integral part of any business, and with increasing competition, it’s more important than ever to be visible. Mirriad, a computer-vision and AI-powered platform company, recently announced a partnership with Tencent that is about to change the advertising game. If you didn’t know, Tencent runs one of the largest online video platforms in China. So how does it change the advertising game, you ask?

Mirriad’s technology enables advertisers to reach their target audience by integrating branded content (or ads) directly into movies and TV series. So, for instance, if an actor is holding just a regular cup of joe in a movie, this new API will enable Tencent to change that cup of coffee into a branded cup of coffee. Matthew Brennan, a speaker and a writer who specialises in analysing Tencent & WeChat shared a glimpse of how this tech works.

While we’re not sure if these ads will be clickable, they’ll still have a significant subconscious impact, if not a direct one. Marketers have long talked of mood marketing that builds a personal connection between the brand and the targeted user. So, with the ability to insert ads into crucial scenes and moments, advertisers will be able to engage with their target users in a way that wasn’t possible before.

Mirriad currently has a 2-year contract with Tencent where they’ll trial exclusively on the latter’s video platform. But if trials are successful in that they don’t offer a jarring viewing experience, we can soon expect this tech to go mainstream.

How We are AI – by NY Times

It would be hard to summarize this in-depth article/exposé from the NYT, but…

A.I. Is Learning From Humans. Many Humans.

Artificial intelligence is being taught by thousands of office workers around the world. It is not exactly futuristic work.

  • A.I., most people in the tech industry would tell you, is the future of their industry, and it is improving fast thanks to something called machine learning. But tech executives rarely discuss the labor-intensive process that goes into its creation. A.I. is learning from humans. Lots and lots of humans.
  • Before an A.I. system can learn, someone has to label the data supplied to it. Humans, for example, must pinpoint the polyps. The work is vital to the creation of artificial intelligence like self-driving cars, surveillance systems and automated health care.

  • Tech companies keep quiet about this work. And they face growing concerns from privacy activists over the large amounts of personal data they are storing and sharing with outside businesses.

  • Tens of thousands more workers, independent contractors usually working in their homes, also annotate data through crowdsourcing services like Amazon Mechanical Turk, which lets anyone distribute digital tasks to independent workers in the United States and other countries. The workers earn a few pennies for each label.

    Based in India, iMerit labels data for many of the biggest names in the technology and automobile industries. It declined to name these clients publicly, citing confidentiality agreements. But it recently revealed that its more than 2,000 workers in nine offices around the world are contributing to an online data-labeling service from Amazon called SageMaker Ground Truth. Previously, it listed Microsoft as a client.

    One day, who knows when, artificial intelligence could hollow out the job market. But for now, it is generating relatively low-paying jobs. The market for data labeling passed $500 million in 2018 and it will reach $1.2 billion by 2023, according to the research firm Cognilytica. This kind of work, the study showed, accounted for 80 percent of the time spent building A.I. technology.

    This work can be so upsetting to workers that iMerit tries to limit how much of it they see. Pornography and violence are mixed with more innocuous images, and those labeling the grisly images are sequestered in separate rooms to shield other workers, said Liz O’Sullivan, who oversaw data annotation at an A.I. start-up called Clarifai and has worked closely with iMerit on such projects. “I would not be surprised if this causes post-traumatic stress disorder — or worse. It is hard to find a company that is not ethically deplorable that will take this on,” she said. “You have to pad the porn and violence with other work, so the workers don’t have to look at porn, porn, porn, beheading, beheading, beheading.”

     Source: NYT

Why It’s Hot: Amid all the tech-first talk of AI, this was FASCINATING to me. I did not know this was the reality of “training AI.”

Phone a Friend: a mobile app for predicting teen suicide attempts

Rising suicide rates in the US are disproportionately affecting 10-24 year-olds, with suicide as the second leading cause of death after unintentional injuries. It’s a complex and multifaceted topic, and one that leaves those whose lives are impacted wondering what they could have done differently, to recognize the signs and intervene.

Researchers are fast at work figuring out whether a machine learning algorithm might be able to use data from an individual’s mobile device to assess risk and predict an imminent suicide attempt – before there may even be any outward signs. This work is part of the Mobile Assessment for the Prediction of Suicide (MAPS) study, involving 50 teenagers in New York and Pennsylvania. If successful, the effort could lead to a viable solution to an increasingly troubling societal problem.

Why It’s Hot

We’re just scratching the surface of the treasure trove of insights that might be buried in the mountains of data we’re all generating every day. Our ability to understand people more deeply, without relying on “new” sources of data, will have implications for the experiences brands and marketers deliver.

Selfies Get Serious: Introducing the 30-second selfie full-fitness checkup

Keeping an eye on subtle changes in common health risks is not an easy task for the average person. Yet by the time real symptoms are obvious, it’s often too late to take the kind of action that would prevent a problem from snowballing.

Researchers at the University of Toronto have developed an app that appears capable of turning a 30-second selfie into a diagnostic tool for quantifying a range of health risks.

“Anura promises an impressively thorough physical examination for just half a minute of your time. Simply based on a person’s facial features, captured through the latest deep learning technology, it can assess heart rate, breathing, stress, skin age, vascular age, body mass index (yes, from your face!), cardiovascular disease, heart attack and stroke risk, cardiac workload, vascular capacity, blood pressure, and more.”

It’s easy to be skeptical about the accuracy of results possible from simply looking at a face for 30 seconds, but the researchers have demonstrated up to 96% accuracy in measuring blood pressure – and when the objective is to give people a way of realizing when it might be time to take action, that level of accuracy may actually be more than enough.

Why It’s Hot

For marketers looking to better identify the times, places and people for whom their products and services are likely to be most relevant, the convergence of biometrics with advanced algorithms and AI – all in a device most people carry around with them every day – could be a game-changer.

(This also brings up perennial issues of privacy & personal information, and trade-offs we need to make for the benefits emerging tech provides.)

The AI Drone Crocodile Hunter

Last summer, Australia began testing drones at their beaches to help spot distressed swimmers – acting as overhead lifeguards. Now the same company that created that technology, Ripper Group, is creating an algorithm for their drones to spot crocodiles.

While not frequent, crocodile attacks have gone up in recent years. And crocodiles are not easily identified when they spend up to 45 minutes under murky water. So the Ripper Group is using machine learning to train drones to distinguish crocodiles from 16 other marine animals, boats, and humans through a large database of images.

The drones also include warning sirens and flotation devices for up to four people, to assist in emergency rescue when danger is spotted.

Why It’s Hot

Lifeguards are limited in what they can see and how quickly they can act. With the assistance of drones, beachgoers can stay carefree.

Source

Make getting drunk great again

British data science company DataSparQ has developed facial recognition-based AI technology to prevent entitled bros from cutting the line at bars. This “technology puts customers in an ‘intelligently virtual’ queue, letting bar staff know who really was next” and who’s cutting the line.

“The system works by displaying a live video of everyone queuing on a screen above the bar. A number appears above each customer’s head — which represents their place in the queue — and gives them an estimated wait time until they get served. Bar staff will know exactly who’s next, helping bars and pubs to maximise their ordering efficiency and to keep the drinks flowing.”
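The queue bookkeeping itself is simple once faces are detected and tracked (the computer vision is the hard part). A toy sketch of the logic, where the 90-second average service time is an assumption for illustration:

```python
from collections import deque

class BarQueue:
    """First detected, first served."""
    def __init__(self, avg_service_seconds=90):
        self.waiting = deque()
        self.avg = avg_service_seconds

    def detect(self, customer_id):
        """Called when the camera spots a new customer joining the queue.
        Returns (number shown above their head, estimated wait in seconds)."""
        self.waiting.append(customer_id)
        position = len(self.waiting)
        return position, (position - 1) * self.avg

    def serve_next(self):
        """Tell bar staff who is genuinely next."""
        return self.waiting.popleft()

q = BarQueue()
print(q.detect("tall guy, red shirt"))  # (1, 0)
print(q.detect("woman with dog"))       # (2, 90)
print(q.serve_next())                   # tall guy, red shirt
```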

Story on Engadget

Why it’s Hot

Using AI to help solve these types of trifling irritations is better than having to tolerate other people’s sense of entitlement, though it also highlights the need to police rude behavior through something other than raising your kids well.

Retail wants a Minority Report for returns

In what now seems inevitable, an online fashion retailer in India owned by an e-commerce startup that’s backed by Walmart is doing research with Deep Neural Networks to predict which items a buyer will return before they buy the item.

With this knowledge, they’ll be better able to predict their returns costs, but more interestingly, they’ll be able to incentivize shoppers to NOT return as much, using both loss and gain offers related to items in one’s cart.

The nuts and bolts of it: the AI will assign you a score based on what it determines your risk of returning a specific item to be. This data could come from your returns history, as well as less obvious data points, such as your search and shopping patterns elsewhere online, your credit score, and predictions about your size and fit based on aggregated data about other people.

Then it will treat you differently based on that assessment. If you’re put in a high risk category, you may pay more for shipping, or you may be offered a discount in order to accept a no-returns policy tailored just for you. It’s like car insurance for those under 25, but on hyper-drive. If you fit a certain demo, you may start paying more for everything.
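Put together, the scoring and tiering might look like the sketch below. The weights, thresholds, and offers are all hypothetical, invented to illustrate the mechanism; the retailer's actual model is a deep neural network, not a linear formula:

```python
def return_risk(history_rate, size_uncertainty, browse_signal):
    """Hypothetical linear score; all inputs normalized to [0, 1].
    Weights are illustrative only."""
    return 0.5 * history_rate + 0.3 * size_uncertainty + 0.2 * browse_signal

def checkout_terms(score):
    # Differential treatment by risk tier, as described above.
    if score > 0.7:
        return "paid shipping, or a discount for accepting no returns"
    if score > 0.4:
        return "standard terms"
    return "free returns"

# A frequent returner with uncertain sizing lands in the high-risk tier.
print(checkout_terms(return_risk(0.9, 0.8, 0.5)))
```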

Preliminary tests have shown promise in reducing return rates.

So many questions:

Is this a good idea from a brand perspective? If this becomes a trend, will retailers with cheap capital that can afford high-returns volume smear this practice as a way to gain market share?

Will this drive more people to better protect their data and “hide” themselves online? We might be OK with being fed targeted ads based on our data, but what happens when your data footprint and demo makes that jacket you wanted cost more?

Will this encourage more people to shop at brick and mortar stores to sidestep retail’s big brother? Or will brick and mortar stores find a way to follow suit?

How much might this information flow back up the supply chain, to product design, even?

Why it’s hot

Returns are expensive for retailers. They’re also bad for the environment, as many returns are simply sent to the landfill, not to mention the carbon emissions from shipping them back.

So, many retailers are scrambling to find the balance between reducing friction in the buying process by offering easy returns, on the one hand, and reducing the amount of actual returns, on the other.

There’s been talk of Amazon using predictive models to ship you stuff without you ever “buying” it. You return what you don’t want, and it eventually learns your preferences to the point where you just receive a box of stuff at intervals, and money is extracted from your bank account. This also might reduce fuel consumption.

How precise can these predictive models get? And how might people be able to thwart them? Is there a non-dystopian way to reduce returns?

Source: ZDNet

A monkey has been able to control a computer with his brain


[Images: Neuralink graphic; the N1 sensor; the N1 array in action.]

Neuralink, the startup Elon Musk founded in 2017, is working on technology based around “threads,” which it says can be implanted in human brains with much less potential impact to the surrounding brain tissue than what’s currently used for today’s brain-computer interfaces. “Most people don’t realize, we can solve that with a chip,” Musk said to kick off Neuralink’s event, talking about some of the brain disorders and issues the company hopes to solve.

Musk also said that, long-term, Neuralink really is about figuring out a way to “achieve a sort of symbiosis with artificial intelligence.” He went on to say, “This is not a mandatory thing. This is something you can choose to have if you want.”

For now, however, the aim is medical, and the plan is to use a robot Neuralink has created that operates somewhat like a “sewing machine” to implant these threads, which are incredibly thin (between 4 and 6 μm, about one-third the diameter of the thinnest human hair), deep within a person’s brain tissue, where they will be capable of performing both read and write operations at very high data volume.

These probes are incredibly fine, and far too small to insert by human hand. Neuralink has developed a robot that can stitch the probes in through an incision. It’s initially cut to two millimeters, then dilated to eight millimeters, placed in and then glued shut. The surgery can take less than an hour.

No wires poking out of your head
“It uses an iPhone app to interface with the neural link, using a simple interface to train people how to use the link. It basically bluetooths to your phone,” Musk said.

Is there going to be a brain app store? Will we have ads in our brain?
“Conceivably there could be some kind of app store thing in the future,” Musk said. While ads on phones are mildly annoying, ads in the brain could be a disaster waiting to happen.

Why it’s hot
A.I.: you won’t be able to beat it, so join it
Interfacing our brains with machines may save us from an artificial intelligence doomsday scenario. According to Elon Musk, if we want to avoid becoming the equivalent of primates in an AI-dominated world, connecting our minds to computing capabilities is a solution that needs to be explored.

“This is going to sound pretty weird, but [we want to] achieve a symbiosis with artificial intelligence,” Musk said. “This is not a mandatory thing! This is a thing that you can choose to have if you want. I think this is going to be something really important at a civilization-scale level. I’ve said a lot about A.I. over the years, but I think even in a benign A.I. scenario we will be left behind.”

Think about the kind of “straight from the brain” data we would have at our disposal, and how we would use it.


Forget a Thousand Words. Pictures Could Be Worth Big Bucks for Amazon Fashion – Adweek

Amazon is rolling out StyleSnap, its AI-enabled shopping feature that helps you shop from a photograph or snapshot. Consumers upload images to the Amazon app and it considers factors like brand, price and reviews to recommend similar items.

Amazon has been able to leverage data from brands sold on its site to develop products that are good enough or close enough to the originals, usually at lower price points, and thereby gain an edge. But it’s still only a destination for basics like T-shirts and socks. With StyleSnap, Amazon is hoping to further crack the online retailing sector.

Why It’s Hot

Snapping and sharing is already part of retail culture, and now Amazon is creating a simple and seamless way of adding shopping and purchasing to this ubiquitous habit. The combination of AI and user reviews in its algorithm could change the way we shop: recommendations won’t be based only on the look of an item, but also on how customers experience it.


Source: Forget a Thousand Words. Pictures Could Be Worth Big Bucks for Amazon Fashion – Adweek

Other sources: https://www.cnet.com/news/amazon-stylesnap-uses-ai-to-help-you-shop-for-clothes/

Applying AI for Social Good

By Ankita Pamnani

Interest in artificial intelligence (AI) has increased dramatically in recent years, and AI has been successfully applied to societal challenges. It has great potential to deliver tremendous social good in the future.

AI is already being applied in real life to about one-third of the use cases studied, albeit in relatively small tests. Applications range from diagnosing cancer to helping blind people navigate their surroundings, identifying victims of online sexual exploitation, and aiding disaster-relief efforts.

AI has a broad potential across a range of social domains.

  • Education
    • Goals here include maximizing student achievement and improving teachers’ productivity. For example, adaptive-learning technology could recommend content to students based on their past success and engagement with the material.
  • Public and Social Sector
  • Economic Empowerment
    • With an emphasis on currently vulnerable populations, these domains involve opening access to economic resources and opportunities, including jobs, the development of skills, and market information. For example, AI can be used to detect plant damage early through low-altitude sensors, including smartphones and drones, to improve yields for small farms.
  • Environment
    • Sustaining biodiversity and combating the depletion of natural resources, pollution, and climate change are challenges in this domain.

Some of the issues that we are currently facing with social data

  • Data needed for social-impact uses may not be easily accessible
    • Much of the data essential or useful for social-good applications is in private hands or in public institutions that might not be willing to share it. These data owners include telecommunications and satellite companies; social-media platforms; financial institutions (for details such as credit histories); hospitals, doctors, and other health providers (medical information); and governments (including tax information for private individuals).
  • The expert AI talent needed to develop and train AI models is in short supply
    • The complexity of problems increases significantly when use cases require several AI capabilities to work together cohesively, as well as multiple different data-type inputs. Progress in developing solutions for these cases will thus require high-level talent, for which demand far outstrips supply and competition is fierce.
  • ‘Last-mile’ implementation challenges are also a significant bottleneck for AI deployment for social good
    • Organizations may also have difficulty interpreting the results of an AI model. Even if a model achieves a desired level of accuracy on test data, new or unanticipated failure cases often appear in real-life scenarios.


McDonald’s Personalizes the Drive-Thru Menu

Next time you pull up to a McDonald’s drive-thru, you might see exactly what you’re craving front and center. Menus will be personalized based on factors like weather, local events, restaurant traffic, and trending items.

This new technology will be powered by their acquisition of personalization company Dynamic Yield. The menu can be programmed against triggers with scenarios such as offering ice cream and iced coffee when the temperature rises above 80 degrees, or pushing hot chocolate when it starts to rain.

Once a person starts ordering, the menu will offer add-ons based on the previous selections made. For example, a person ordering a salad may be offered a smoothie instead of fries.
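The triggers described above amount to simple rules over live context. A toy sketch using the thresholds and pairings from the article (Dynamic Yield's actual engine is far more sophisticated than hard-coded rules):

```python
def featured_items(temp_f, raining, order_so_far=()):
    """Return menu items to feature, given weather and the order in progress."""
    items = []
    if temp_f > 80:
        items += ["ice cream", "iced coffee"]  # hot-day trigger
    if raining:
        items.append("hot chocolate")          # rain trigger
    if "salad" in order_so_far:
        items.append("smoothie")               # lighter add-on than fries
    return items

print(featured_items(85, raining=False))                          # ['ice cream', 'iced coffee']
print(featured_items(70, raining=True, order_so_far=("salad",)))  # ['hot chocolate', 'smoothie']
```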

Why It’s Hot

McDonald’s already builds off of customer’s cravings. Now that these cravings can be predicted, personalized, and optimized over time, there’s a high likelihood that customers will be ordering more at the drive-thru window.

DeepMind? Pffft! More like “dumb as a bag of rocks.”

Google’s DeepMind AI project, self-described as “the world leader in artificial intelligence research,” was recently tested against the type of math test that 16-year-olds take in the UK. The result? It scored only 14 out of 40. Womp womp!

“The researchers tested several types of AI and found that algorithms struggle to translate a question as it appears on a test, full of words and symbols and functions, into the actual operations needed to solve it.” (Medium)


Why It’s Hot

There is no shortage of angst by humans worried about losing their jobs to AI. Instead of feeling a reprieve, humans should take this as a sign that AI might just be best designed to complement human judgements and not to replace them.

AI Voice Assistant

Such tasks, historically performed by a personal assistant or secretary, include taking dictation, reading text or email messages aloud, looking up phone numbers, scheduling, placing phone calls, and reminding the end user about appointments. Popular virtual assistants currently include Amazon Alexa, Apple’s Siri, Google Now, and Microsoft’s Cortana — the digital assistant built into Windows Phone 8.1 and Windows 10.

Why it’s hot:

  • Intelligent Personal Assistant: This is software that can assist people with basic tasks, usually using natural language. Intelligent personal assistants can go online and search for an answer to a user’s question. Either text or voice can trigger an action.

  • Smart Assistant: This term usually refers to the types of physical items that can provide various services by using smart speakers that listen for a wake word to become active and perform certain tasks. Amazon’s Echo, Google’s Home, and Apple’s HomePod are types of smart assistants.

  • Virtual Digital Assistants: These are automated software applications or platforms that assist the user by understanding natural language in either written or spoken form.

  • Voice Assistant: The key here is voice. A voice assistant is a digital assistant that uses voice recognition, speech synthesis, and natural language processing (NLP) to provide a service through a particular application.

Tractica is a market intelligence firm that focuses on human interaction with technology. Its reports project that unique consumer users of virtual digital assistants will grow from more than 390 million worldwide in 2015 to 1.8 billion by the end of 2021. Business users are expected to increase from 155 million in 2015 to 843 million by 2021. With that kind of projected growth, revenue is forecast to grow from $1.6 billion in 2015 to $15.8 billion in 2021.

At Unilever, Resumes are Out – Algorithms are In

The traditional hiring process, especially at large organizations, can be exhaustive and often ineffective: 83% of candidates rate their experience as “poor,” and 30-50% of the candidates companies choose end up failing.

Unilever recruits more than 30,000 people a year and processes around 1.8 million job applications. As you can imagine, this takes a tremendous amount of time and resources, and too often talented candidates are overlooked simply because they’re buried at the bottom of a pile of CVs. To tackle this problem, Unilever partnered with Pymetrics, an online platform on a mission to make the recruiting process more predictive and less biased than traditional methods.

Candidates start the interview process by accessing the platform at home from a computer or mobile-screen, and playing a selection of games that test their aptitude, logic and reasoning, and appetite for risk. Machine learning algorithms are then used to assess their suitability for whatever role they have applied for, by matching their profiles against those of previously successful employees.
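One simple way to "match profiles against those of previously successful employees," as described above, is to compare a candidate's trait vector against the average profile of past hires. This is an illustrative sketch only: Pymetrics' actual models are proprietary, and the cosine-similarity approach, trait vectors, and threshold here are assumptions.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def suitability(candidate: list[float],
                successful_profiles: list[list[float]],
                threshold: float = 0.9) -> tuple[float, bool]:
    """Score a candidate's game-derived traits against the mean
    profile of previously successful employees for the role."""
    n = len(successful_profiles)
    mean_profile = [sum(p[i] for p in successful_profiles) / n
                    for i in range(len(candidate))]
    score = cosine(candidate, mean_profile)
    return score, score >= threshold
```

A production system would instead train a supervised classifier on labeled hiring outcomes, with explicit bias auditing, but the core idea of scoring candidates against a learned profile is the same.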

The second stage of the process involves submitting a video interview that is reviewed not by a human, but a machine learning algorithm. The algorithm examines the videos of candidates who answer various questions, and through a mixture of natural language processing and body language analysis, determines who is likely to be a good fit.

One of the most nerve-wracking aspects of the job interview process can be the anticipation of feedback, or the lack thereof; around 45% of job candidates say they never hear back from a prospective employer. With the AI-powered platform, every applicant gets a couple of pages of feedback: how they did in the games, how they did in the video interview, which of their characteristics fit the role, and, if they don’t fit, why not and what they could do to be successful in a future application.


Why it’s hot:
  Making experiences, even hiring experiences, feel more human with AI – The existing hiring process can leave candidates feeling confused, abandoned, and disadvantaged. Using AI and deep analysis helps hiring managers see candidates for who they are, beyond their age, gender, race, education, and socioeconomic status. Companies like Unilever aren’t just reducing their recruiting costs and time to hire; they’re setting an industry precedent that a candidate’s potential to succeed doesn’t lie in who they know, where they came from, or how they appear on paper. [Source: Pymetrics]

Breaking the Bias One Translation at a Time

The words we use daily can directly affect our perception and the way we think. For example, the effect of gender bias on language can influence how both women and men see certain professions. The terms cameraman, fireman and policeman, for example, are perceived as more masculine, while words like midwife are more stereotypically feminine.

Source: https://www.contagious.io/articles/what-do-you-mean

Released on International Women’s Day 2019, ElaN Languages’ Unbias Button translates biased words, such as gendered job titles, into gender-neutral ones.

Why it’s hot: This is a subtle way to change our awareness of the words we use on a daily basis.

Going Paperless in a Brick and Mortar

Lush is known for its colorful soaps and bath bombs, but the brand has consistently prioritized going green above all else—and its very first SXSW activation was no exception.

The brand set up its bath bomb pop-up to showcase its 54 new bath bomb creations using absolutely no signage. Instead, attendees could download the Lush Labs app, which uses AI and machine learning to determine what each bath bomb is with just a quick snapshot. “At Lush, we care about sustainability, and we wanted to take that same lens … and apply it to the way we are using technology,” Charlotte Nisbet, global concept lead at Lush, told Adweek.

Nisbet explained that three decades ago, Lush co-founder Mo Constantine invented the bath bomb when brainstorming a packaging-free alternative to bubble bath. (The new bath bombs are being released globally on March 29 in celebration of 30 years since Constantine created the first bath bomb in her garden shed in England.)

“But we were still facing the barrier to being even more environmentally friendly with packaging and signage in our shops,” Nisbet said.

Enter the Lush Lens feature on the Lush Labs app, which lets consumers scan a product with their phone to see all the key information they’d need before making a purchase: price, ingredients and even videos of what the bath bomb looks like when submerged in water. “This means that not only can we avoid printing signage that will eventually need to be replaced, but also that customers can get information on their products anytime while at home,” Nisbet said.

Why It’s Hot

The application sounds cool but is this a sustainable direction for more stores to take? As brick and mortar stores continue to struggle, we could see many start to experiment with ways to bring digital experiences to consumers already plugged into their smartphones in retail spaces.

Source: Adweek

Woebot – Highly Praised App for Mental Health

AI counseling is the wave of the future. Cognitive Behavioral Therapy administered by a smart chatbot, via an app relying on SMS, has become highly popular and well reviewed. Woebot isn’t just the face of a trend; it’s a notable player in the technology transforming healthcare.

Why It’s Hot

It’s not new. It’s better. The first counseling software, Eliza, appeared around 1966. Part of the difficulty was that it required human intervention. Ironically, in 2019, when many believe a lack of human contact is part of the problem, that very void addresses a barrier in therapy: the perceived lack of anonymity and privacy. Sure, therapist visits are confidential, blah blah, but people naturally have difficulty opening up in person. Plus there’s the waiting room anxiety. With an app, studies have shown, people get to the heart of their problem quicker.

Why it Matters

There’s a ton of demand for “talk therapy” and related services. Human counselors can’t keep up; people wait weeks and months for appointments, and that’s in the U.S., where counselors are compensated well. In this on-demand age, that’s seen as unacceptable. Woebot and others address the market’s need for immediate care. Another issue is cost: therapy is expensive. Apps are an obvious solve here. No co-pay.

Obligatory Statement

All the apps remind users they’re no substitute for human counselors but they are helpful in reflecting behavior patterns and emotional red flags back to their users. At the very least, it’ll help you make the most of your next therapy visit.

Smart cat shelter uses AI to let stray cats in during winter

For stray cats, winter can be fatal. A Baidu engineer has devised an AI Smart Cattery to shelter stray cats and help them survive Beijing’s cold winter.

It can accurately identify 174 different cat breeds, letting them enter and exit as they please. A door slides open if the camera spots a cat, but it won’t work on dogs. Multiple cats can fit inside the space. A fresh-air system monitors oxygen and carbon dioxide levels to ensure the small space is well ventilated.

Another neat camera feature: it can also detect whether a cat is sick. It can identify four common cat ailments, including inflammation, skin problems, and physical trauma. Once a cat is identified as needing care, associated volunteers can be notified to come and collect it.
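The door-gating and volunteer-alert logic described above can be sketched as two small checks on a classifier's output. The breed classifier and disease detector themselves are assumed components (Baidu's real system is not public), and the breed set, ailment set, and confidence threshold below are illustrative stand-ins.

```python
CAT_BREEDS = {"tabby", "siamese", "persian"}  # stand-in for all 174 breeds

def should_open_door(label: str, confidence: float,
                     threshold: float = 0.8) -> bool:
    """Slide the door open only for a confidently identified cat;
    dogs and low-confidence detections keep it shut."""
    return label in CAT_BREEDS and confidence >= threshold

AILMENTS = {"inflammation", "skin problem", "physical trauma"}

def needs_care(findings: set[str]) -> bool:
    """Notify volunteers if the camera flags any known ailment."""
    return bool(findings & AILMENTS)
```

The interesting design choice is the asymmetry: a false negative just makes a cat wait outside for a moment, while a false positive lets a dog in, so a fairly high confidence threshold is the safer default.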

Why it’s Hot: A neat implementation of AI for good. It pushes us to think beyond using AI for marketing purposes and lets us imagine its role in helping solve human (and animal) problems.

 

AI Inspired Flavoring

After years of research, McCormick & Company and IBM have announced the creation of a new AI system that will help spice up the dinner experience. The platform uses machine learning to predict winning flavor combinations and will aid McCormick in developing new recipes faster.

This spring, McCormick will debut the first AI-developed flavors in a new product line named “One.” The new recipe mixes intended for easy one-dish protein and vegetable dinners will include Tuscan Chicken, Bourbon Pork Tenderloin, Farmers Market Chicken, Glazed Salmon and New Orleans Sausage.

The data that led to these flavors involved 40 years of McCormick’s proprietary collection of past product recipes and consumer flavor preference studies.

Why It’s Hot

While the brand partnership seems unexpected, it’s smart of McCormick to take all the data they’ve collected over the years as the leader in their space and put it to good use in product innovation.

Source: https://www.cnet.com/news/ai-gets-spicy-with-new-mccormick-flavors/ 

AI and Implicit Bias

Last weekend, AOC sounded the alarm about new research that found the facial recognition software Amazon is selling to law enforcement falls short on tests for accuracy and bias. According to the Washington Post’s reporting, researchers said Amazon’s algorithms misidentified the gender of darker-skinned women in about 30 percent of their tests. (Of course, Amazon promises that the facial recognition software in use is not the one tested by researchers.)

The problem stems from the sets of photos the algorithms were trained on, which skew heavily toward white men, the researchers said. That finding is what prompted AOC’s warning on Twitter.

And if you’re really behind on implicit bias, please visit Harvard’s Project Implicit to learn more.

Why It’s Hot:

  1. For possibly the first time, Congress has a credible authority on technology, and she’s on the House Oversight Committee, so tech companies might want to take notice.
  2. As AI becomes real, we need to make sure we’re designing for everyone.

Source: Washington Post

tour the dali museum, with your host…DALI!

When Salvador Dali once said, “If someday I may die…I hope the people…will say, ‘Dali has died, but not entirely,’” I’m not sure he knew how right he was. Using AI, his namesake museum in St. Petersburg, Florida, has now “resurrected” Dali to welcome visitors and provide commentary on his works as you move through the institution.

According to the museum, they did it by “pulling content from millions of frames of interviews with the artist and overlaying it onto an actor’s face–a digital mask, of sorts, that allowed the actor to appear as Dali whatever expression he made.” It also “cast another actor from Barcelona to ensure that the voice matched the countenance.”

Why it’s hot:

There’s no better experience if you want to learn about an individual and his/her art than to hear about it directly from that person. Especially when they’re as dynamic and memorable as Salvador Dali. Unfortunately, most individuals famous enough to have their own museum likely aren’t on hand to do that in person. Having a virtual Dali guide you through his works seems a perfect way to experience his brilliance as both an artist, and a human being.

[Source]

Finland’s AI Ambitions

Finland has set an ambitious goal to train 1% of its population (55,000 people) in the basics of AI. The hope is that widespread technological expertise can boost the economy and keep Finland competitive in international markets, especially in the wake of Nokia’s decline.

In order to achieve this, they created a free online course called the Elements of AI. The course is made up of six parts, covering everything from machine learning to neural networks, and has a focus on practical, problem-solving applications.

The initiative has the support of both the government and local businesses, with 250 companies vowing to train part or all of their workforces. So far, more than 10,500 people have graduated from the course.

Why It’s Hot

At a time when so many are afraid of how new technology will impact the current career landscape and are struggling to keep up with the pace of change, Finland’s idea to promote a free educational resource to equip its workforce is a smart move.

Source: https://www.technologyreview.com/the-download/612762/a-countrys-ambitious-plan-to-teach-anyone-the-basics-of-ai/ 

Hotel of the future

China’s e-commerce giant Alibaba Group opened its first “future hotel”, also known as “Flyzoo Hotel”, in Hangzhou, China.

Equipped with the latest leading technology, the hotel offers many futuristic features: guests can check in without talking to anyone, then walk straight to their rooms and get their faces scanned at the door to gain entry.

Robots can be found everywhere in the hotel; they guide guests with recorded voice messages and accompany them during their stay. Guests can also control indoor temperature, lighting intensity, and household appliances with their voices.

A very notable device the hotel is equipped with is the “Tmall Genie,” an AI management system. The one-meter-high Genie robot follows guests around and takes orders, buying groceries, ordering meals, and picking up laundry through voice command, touch, or simple gestures.


Why it’s hot: As a response to high labor costs, a way to create uniformity in hospitality services, and a re-imagining of the hotel industry, this robot-enabled hotel is smarter, more automated, and an inspiration for future digital travelers.

Source

Postmates’ Food Delivery Robot

Postmates has introduced Los Angeles to Serve, a robot that will deliver food. Serve, which looks like a cooler on wheels with digital eyes, moves at walking speed and can carry up to 50 lbs of food. On one charge, it can cover 30 miles.

Customers will be able to order food via the Postmates app, and then will receive a code to unlock the robot to retrieve their food when it arrives. They can also alert Postmates of any issues by interacting with Serve’s digital touch screen.

Serve is outfitted with lidar sensors to ensure it avoids obstacles, and uses a turn signal light to indicate to passersby that it is changing directions.

Postmates calls Serve a socially aware navigation system, saying, “Serve’s personality is all about understanding people. Nothing about Serve’s intelligence is artificial.” In its announcement about its newest team member, the company notes that it is trying to be more city-friendly, since the robots won’t contribute to heavy street traffic.

Why It’s Hot

Postmates has come up with a smart solution to enhance their delivery service while being environmentally conscious.

Source: https://www.technologyreview.com/the-download/612605/postmates-has-launched-a-delivery-robot-that-will-bring-lunch-to-your-door/

Powering customer journeys in the age of AI

 

AI is at the top of the to-do list of every executive embarking on a digital transformation; however, CIOs are still trying to figure out how to harness the full strength of artificial intelligence. Most companies don’t fully understand the complexities of AI and therefore don’t have the right strategy in place to execute relevant and purposeful interactions with customers.

“So, how do businesses go about unlocking these information systems to make AI a reality? The answer is an API strategy. With the ability to securely share data across systems regardless of format or source, APIs become the nervous system of the enterprise. As a result of making appropriate API calls, applications that interact with AI models can now take actionable steps, based on the insights provided by the AI system — or the brain”

The key to building a successful AI-based platform is to invest in delivering consistent APIs that are easily discoverable and consumable by developers across the organization. Fortunately, with the emergence of API marketplaces, software developers don’t have to break a sweat to create everything from scratch. Instead, they can discover and reuse the work done by others internally and externally to accelerate development work.

Additionally, APIs help train the AI system by enabling access to the right information. APIs also provide the ability for AI systems to act across the entire customer journey by enabling a communication channel — the nervous system — with the broader application landscape. By calling appropriate APIs, developers can act on insights provided by the AI system. For example, Alexa or Siri cannot place an order for a customer directly in the back-end ERP system without a bridge. An API can serve as that bridge, as well as be reused for other application interactions to that ERP system down the road.
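The Alexa/Siri-to-ERP bridge described above boils down to translating a structured assistant intent into a call on a back-end API. A minimal sketch follows; the endpoint URL, payload shape, and intent fields are invented for illustration, and the function returns the would-be request rather than sending it, which keeps the sketch self-contained.

```python
import json

# Hypothetical ERP endpoint -- a real deployment would use the
# enterprise's actual order-management API.
ERP_ORDERS_ENDPOINT = "https://erp.example.com/api/v1/orders"

def intent_to_order_request(intent: dict) -> dict:
    """Translate a voice assistant's order intent into an ERP API call.

    The API acts as the 'bridge': the assistant never touches the ERP
    system directly, it only emits an intent that this layer maps onto
    a well-defined request.
    """
    return {
        "url": ERP_ORDERS_ENDPOINT,
        "method": "POST",
        "body": json.dumps({
            "customer_id": intent["customer_id"],
            "sku": intent["item_sku"],
            "quantity": intent.get("quantity", 1),
        }),
    }
```

Because the API is the stable contract, the same endpoint can later be reused by a web front end or another AI system, which is the reuse argument the article makes.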

At their core, APIs are developed to play a specific role — unlocking data from legacy systems, composing data into processes or delivering an experience. By unlocking data that exists in siloed systems, businesses end up democratizing the availability of data across the enterprise. Developers can then choose information sources to train the AI models and connect the AI systems into the enterprise’s broader application network to take action.

Why it’s Hot

If we can help our clients develop customer strategies in tandem with a strong data and API strategy then we’ll be able to deploy 1:1 interactions with customers like the example below.

“Businesses haven’t truly realized the full potential of AI systems at a strategic level, where they are building adaptive platforms that truly create differentiated value for their customers. Most organizations are leveraging AI to analyze large volumes of data and generate insights on customer engagement, though it’s not strategic enough. Strategic value can be realized when these AI systems are plugged into the enterprise’s wider application network to drive personalized, 1:1 customer journeys. With an API strategy in place, businesses can start to realize the full potential AI has to offer.”


China pumps AI-produced propaganda via humanoid virtual anchors

“Xinhua, China’s state-run press agency, has unveiled new “AI anchors” — digital composites created from footage of human hosts that read the news using synthesized voices.”

AI anchors have several advantages over human counterparts: they don’t need to sleep, eat, poop or take a salary.

Story on The Verge

Why It’s Hot

It’s a wholly frightening idea that the 24/7 news cycle will be reduced to this one day. As we struggle to define the line between real news and fake news, we will also have to grapple with fake news anchors.