Google’s DeepMind AI can now beat humans at 57 Atari games

Google subsidiary DeepMind has unveiled an AI called Agent57 that can beat the average human at 57 classic Atari games.

The system achieved this feat using deep reinforcement learning, a machine learning technique that helps an AI improve its decisions by trying out different approaches and learning from its mistakes.
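Under the hood, deep reinforcement learning pairs that trial-and-error loop with a neural network, but the update rule is the same one tabular Q-learning uses. Here's a minimal sketch of the loop on a toy two-state environment made up for illustration (nothing from DeepMind's actual Agent57 code):

```python
import random

# Toy environment (made up for illustration): two states, two actions.
# Taking action 1 in state 0 pays off; everything else pays nothing.
def step(state, action):
    if state == 0 and action == 1:
        return 1, 1.0  # (next_state, reward)
    return 0, 0.0

q = [[0.0, 0.0], [0.0, 0.0]]           # Q-table: q[state][action]
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration

state = 0
for _ in range(10000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < epsilon:
        action = random.randrange(2)
    else:
        action = max((0, 1), key=lambda a: q[state][a])
    next_state, reward = step(state, action)
    # Q-learning update: nudge the value toward reward + discounted future value.
    q[state][action] += alpha * (reward + gamma * max(q[next_state]) - q[state][action])
    state = next_state

print(q)  # q[0][1] should dominate q[0][0] after training
```

Agent57 replaces the table with deep networks and adds far more machinery (exploration bonuses, memory), but the learn-from-mistakes loop above is the core idea.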

In its blog post announcing the release, DeepMind trumpets Agent57 as the most general Atari57 agent since the benchmark’s inception: the first to achieve above-human performance not only on the easy games, but also on the most demanding ones.

Why it’s hot:

When machines learn to play complex games like these, they acquire the capability to think and act strategically. DeepMind’s general-purpose learning algorithms let the machine learn through gamification, in pursuit of human-like intelligence and behavior.

Hands-free@Home

COVID-19 pandemic pushing sales of voice control devices

Sales of voice control devices are expected to experience a boom in growth, thanks to people being locked down and working from home. This is also expected to fuel growth in the broader ecosystem of smart home devices, as guidance to minimize contact with objects that haven’t been disinfected makes things like connected light switches, thermostats and door locks more appealing than ever.

Why It’s Hot:  A critical mass of device penetration and usage will undoubtedly make this a more meaningful platform for brands and marketers to connect and engage with consumers.

With so many millions of people working from home, the value of voice control during the pandemic will ensure that this year, voice control device shipments will grow globally by close to 30% over 2019, despite the key China market being impacted during the first quarter of 2020, according to global tech market advisory firm ABI Research.


Last year, 141 million voice control smart home devices shipped worldwide, the firm said. Heeding the advice to minimize COVID-19 transmission from shared surfaces, even within a home, will help cement the benefits of smart home voice control for millions of consumers, ABI Research said.

“A smarter home can be a safer home,” said Jonathan Collins, ABI research director, in a statement. “Key among the recommendations regarding COVID-19 protection in the home is to clean and disinfect high-touch surfaces daily in household common areas,” such as tables, hard-backed chairs, doorknobs, light switches, remotes, handles, desks, toilets, and sinks.

Voice has already made significant inroads into the smart home space, Collins said. Using voice control means people can avoid commonly touched surfaces around the home, from smartphones to TV remotes, light switches, thermostats, door handles, and more. Voice can also be leveraged for online shopping and information gathering, he said.

When used in conjunction with other smart home devices, voice brings greater benefits, Collins said.

“Voice can be leveraged to control and monitor smart locks to enable deliveries to be placed in the home or another secure location directly or monitored securely on the doorstep until the resident can bring them in,” he said.

Similarly, smart doorbells/video cameras can also ensure deliveries are received securely without the need for face-to-face interaction or exposure, he added. “Such delivery capabilities are especially valuable for those already in home quarantine or for those receiving home testing kits,” Collins said.

He believes that over the long term, “voice control will continue to be the Trojan horse of smart home adoption.” Right now, the pandemic is part of the additional motivation and incentive for voice control in the home to help drive awareness and adoption for a range of additional smart home devices and applications, Collins said.

“Greater emphasis and understanding, and above all, a change of habit and experience in moving away from physical actuation toward using voice in the home will support greater smart home expansion throughout individual homes,” he said. “A greater emphasis on online shopping and delivery will also drive smart home device adoption to ensure those deliveries are securely delivered.”

The legacy of COVID-19 will be that the precautions being taken now will persist: millions of people are bringing new routines into their daily lives in and around their homes, and will keep them for a long time to come, Collins said.

“Smart home vendors and system providers can certainly emphasize the role of voice and other smart home implementations to improve the day-to-day routines within a home and the ability to minimize contact with shared surfaces, as well as securing and automating home deliveries.”

Additionally, he said there is value in integrating smart home monitoring and remote health monitoring with a range of features, such as collecting personal health data points like temperature, activity, and heart rate, alongside environmental data such as air quality and occupancy. This can “help in the wider response and engagement for smart city health management,” Collins said.

Source: TechRepublic

Google and Oxford create ‘The A to Z of AI’ explainer

As machine learning and artificial intelligence usage proliferates in everyday products, there have been many attempts to make it easier to understand. The latest explainer comes from Google and the Oxford Internet Institute with “The A to Z of AI.”

At launch, the “A-Z of AI” covers 26 topics, including bias, how AI is used in climate science, ethics, machine learning, human-in-the-loop, and generative adversarial networks (GANs).


The AI explainer from Google and Oxford will be “refreshed periodically, as new technologies come into play and existing technologies evolve.”

Why it’s hot:
AI is informing just about every facet of society. But AI is a thorny subject, fraught with complex terminology, contradictory information, and general confusion about what it is at its most fundamental level.

Coronavirus Researchers Are Using Technology to Predict the Viral Path

As coronavirus fears spread and hand sanitizer and face masks fly off the shelves, the question is: how do we prevent and mitigate the outbreak?

Researchers are looking to AI for the solution. “After SARS killed 774 people around the world in the mid-2000s, John Brownstein, chief innovation officer at Boston Children’s Hospital and a professor at Harvard Medical School, and his team built a tool called Healthmap, which scrapes information about new outbreaks from online news reports, chatrooms and more. Healthmap then organizes that previously disparate data, generating visualizations that show how and where communicable diseases like the coronavirus are spreading. Healthmap’s output supplements more traditional data-gathering techniques used by organizations like the U.S. Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO). The project’s data is being used by clinicians, researchers and governments.”
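Strip away the scraping and language processing, and Healthmap's core mechanic is aggregating scattered reports by location so they can be mapped. A toy sketch with invented data (not Healthmap's actual pipeline or schema):

```python
from collections import Counter

# Hypothetical scraped reports; real inputs would come from news feeds,
# chatrooms, and official bulletins in many formats.
reports = [
    {"location": "Wuhan, CN", "cases": 12},
    {"location": "Seattle, US", "cases": 1},
    {"location": "Wuhan, CN", "cases": 30},
]

cases_by_location = Counter()
for report in reports:
    cases_by_location[report["location"]] += report["cases"]

# These totals would feed a map layer; here we just print them.
for location, total in cases_by_location.most_common():
    print(f"{location}: {total} reported cases")
```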

https://www.arcgis.com/apps/opsdashboard/index.html#/bda7594740fd40299423467b48e9ecf6

https://healthmap.org/en/

Why it’s hot:

Data is magic! We need to use all the resources at our disposal to mitigate the effects of the epidemic.

Google AI no longer sees gender

Google has decided it wants to avoid potential gender bias in its AI system for identifying images, so it’s choosing to simply use the designator “person” instead.

From The Verge:

The company emailed developers today about the change to its widely used Cloud Vision API tool, which uses AI to analyze images and identify faces, landmarks, explicit content, and other recognizable features. Instead of using “man” or “woman” to identify images, Google will tag such images with labels like “person,” as part of its larger effort to avoid instilling AI algorithms with human bias.

In the email to developers announcing the change, Google cited its own AI guidelines, Business Insider reports: “Given that a person’s gender cannot be inferred by appearance, we have decided to remove these labels in order to align with the Artificial Intelligence Principles at Google, specifically Principle #2: Avoid creating or reinforcing unfair bias.”
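For developers, the change surfaces in the labels the API returns. A minimal sketch of a label-detection call, assuming the google-cloud-vision Python client (v2+, where the image type is vision.Image); post-change, the returned descriptions would read “Person” rather than “Man” or “Woman”:

```python
from google.cloud import vision  # pip install google-cloud-vision

client = vision.ImageAnnotatorClient()  # needs GOOGLE_APPLICATION_CREDENTIALS set

with open("photo.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # After the change, gendered labels are folded into "Person".
    print(label.description, round(label.score, 2))
```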

Why it’s hot:

It’s interesting to see AI companies grapple with the reality of human social life, and navigate the shifting waters of public mores.

Avoiding bias is a major issue in society, and it’s very important that the companies building AI don’t build their human bias into it. But with any new technology, there can be unintended and unpredictable consequences down the line, from even seemingly innocuous or universally accepted ideas.

Source: The Verge

Skincare + AI: Making Mass Personalization Easy

A skincare startup is tackling the complexity consumers face when navigating the category to select the best products for their skincare needs. Rather than adding to the clutter of products, ingredients and “proprietary formulas”, or attempting to educate consumers through exposure to research + science, Proven Skincare simply prescribes personalized solutions for each individual.

After collecting customer input based around 40 key factors, Proven Skincare’s AI combs through a comprehensive database of research, testimonials and dermatology expertise, to identify the best mix of ingredients for each person’s situation.

Ming Zhao, Proven’s CEO, co-founded the company while struggling with her own skincare issues.

“The paradox of choice, the confusion that causes this frustrating cycle of trial and error, is too much for most people to bear,” says Zhao on the latest edition of Ad Age’s Marketer’s Brief podcast. “There’s a lot of cycles of buying expensive product, only for it to then sit on somebody’s vanity shelf for months to come.”

As the human body’s largest organ, skin should be properly cared for—using products and ingredients that have been proven to work for specific individuals. That’s the core mission behind Proven Skincare, a new beauty company that has tapped technology to research the best skincare regimen for consumers.

Why It’s Hot: In a world where the benefits of things like AI and big data are not often apparent to the “average” person, this is an example of technology that solves a real human problem, while remaining invisible (i.e. it’s not about the tech).

Delta Air Lines bets on AI to help its operations run smoothly in bad weather

In its first-ever keynote at CES, Delta announced a new AI-driven system that will help it make smarter decisions when the weather turns tough and its finely tuned operations get out of whack. In a first for the passenger airline industry, the company built a full-scale digital simulation of its operations that its new system can then use to suggest the best way to handle a given situation with the fewest possible disruptions for passengers.

It’s no secret that the logistics of running an airline are incredibly complex, even on the best of days. On days with bad weather, that means airline staff must figure out how to swap airplanes between routes to keep schedules on track, ensure that flight crews are available and within their FAA duty time regulations and that passengers can make their connections.

“Our customers expect us to get them to their destinations safely and on time, in good weather and bad,” said Erik Snell, Delta’s senior vice president of its Operations & Customer Center. “That’s why we’re adding a machine learning platform to our array of behind-the-scenes tools so that the more than 80,000 people of Delta can even more quickly and effectively solve problems, even in the most challenging situations.”

The new platform will go online in the spring of this year, the company says, and, like most of today’s AI systems, will get smarter over time as it is fed more real-world data. Thanks to the included simulation of Delta’s operations, it’ll also include a post-mortem tool to help staff look at which decisions could have resulted in better outcomes.

Source: TechCrunch

Why It’s Hot

Delivering best-in-class CX in the airline industry is a beast, and Delta has consistently tried to win here (as previously covered by the Forrester CX Index and the like). While lacking in the super-cool-tech factor, widespread use of AI in the airline industry makes a ton of sense.

How Social Media Can Help Save Indonesians From Climate Disasters

28-year-old architect Nashin Mahtani’s website, PetaBencana.id, uses artificial intelligence and chat-bots to monitor and respond to social posts on Twitter, Facebook, and Telegram by communities in Indonesia hit by floods. The information is then displayed on a real-time map that is monitored by emergency services.


“Jakarta is the Twitter capital of the world, generating 2% of the world’s tweets, and our team noticed that during a flood, people were tweeting in real-time with an incredible frequency, even while standing in flood waters,” said Mahtani, a graduate of Canada’s University of Waterloo. Jakarta residents often share information with each other online about road blockages, rising waters and infrastructure failures.

Unlike other relief systems that mine data on social media, PetaBencana.id adopts AI-assisted “humanitarian chat-bots” to engage in conversations with residents and confirm flooding incidents. “This allows us to gather confirmed situational updates from street level, in a manner that removes the need for expensive and time-consuming data processing,” Mahtani said.
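The confirmation step is the clever part: the bot doesn't just scrape mentions, it replies and asks the poster to confirm before anything reaches the map. A toy sketch of that flow (hypothetical logic, not PetaBencana.id's actual bot):

```python
confirmed_reports = []  # only confirmed reports get plotted on the live map

def handle_message(user, text, reply):
    """Engage a poster who mentioned flooding and ask them to confirm."""
    if "flood" in text.lower() or "banjir" in text.lower():  # "banjir" = flood
        reply(user, "Are you reporting a flood? Reply CONFIRM plus your location.")

def handle_reply(user, text):
    """Record a confirmed, geolocated report."""
    if text.upper().startswith("CONFIRM"):
        location = text[len("CONFIRM"):].strip() or "unknown"
        confirmed_reports.append({"user": user, "location": location})

handle_message("@resident", "Banjir! Water is rising fast", reply=lambda u, m: print(m))
handle_reply("@resident", "CONFIRM Jalan Sudirman, Jakarta")
print(confirmed_reports)
```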


In early 2020, the project will go nationwide to serve 250 million people and include additional disasters such as forest fires, haze, earthquakes and volcanoes.

Why It’s Hot

Aggregating social data in real-time on a map allows for easy flow of information between residents in need and emergency services who can help them. In a situation when every second counts to help as many people as possible, this use of technology is truly life-saving.

Source

Inside Amazon’s plan for Alexa to run your entire life

The creator of the famous voice assistant dreams of a world where Alexa is everywhere, anticipating your every need.

Speaking with MIT Technology Review, Rohit Prasad, Alexa’s head scientist, revealed further details about where Alexa is headed next. The crux of the plan is for the voice assistant to move from passive to proactive interactions. Rather than wait for and respond to requests, Alexa will anticipate what the user might want. The idea is to turn Alexa into an omnipresent companion that actively shapes and orchestrates your life. This will require Alexa to get to know you better than ever before.

In June at the re:Mars conference, he demoed [view from 53:54] a feature called Alexa Conversations, showing how it might be used to help you plan a night out. Instead of manually initiating a new request for every part of the evening, you would need only to begin the conversation—for example, by asking to book movie tickets. Alexa would then follow up to ask whether you also wanted to make a restaurant reservation or call an Uber.

A more intelligent Alexa

Here’s how Alexa’s software updates will come together to execute the night-out planning scenario. In order to follow up on a movie ticket request with prompts for dinner and an Uber, a neural network learns—through billions of user interactions a week—to recognize which skills are commonly used with one another. This is how intelligent prediction comes into play. When enough users book a dinner after a movie, Alexa will package the skills together and recommend them in conjunction.
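As a flavor of that idea, here's a toy sketch that counts which skills show up together in the same session and recommends frequent partners (illustrative only, with made-up session data, not Amazon's model):

```python
from collections import Counter
from itertools import combinations

# Hypothetical session logs: which skills each user invoked together.
sessions = [
    {"movie_tickets", "restaurant", "ride"},
    {"movie_tickets", "restaurant"},
    {"movie_tickets", "ride"},
    {"weather"},
]

pair_counts = Counter()
for skills in sessions:
    for pair in combinations(sorted(skills), 2):
        pair_counts[pair] += 1

def recommend(skill, min_count=2):
    """Skills that co-occur with `skill` at least min_count times."""
    partners = []
    for (a, b), n in pair_counts.items():
        if n >= min_count and skill in (a, b):
            partners.append(b if a == skill else a)
    return partners

print(recommend("movie_tickets"))  # -> ['restaurant', 'ride'] on this toy data
```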

But reasoning is required to know what time to book the Uber. Taking into account your and the theater’s location, the start time of your movie, and the expected traffic, Alexa figures out when the car should pick you up to get you there on time.
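That reasoning step is, at bottom, back-calculation from the showtime. A worked example with made-up numbers:

```python
from datetime import datetime, timedelta

showtime = datetime(2020, 4, 3, 19, 30)    # the movie starts at 7:30 pm
drive_time = timedelta(minutes=25)         # predicted traffic for the route
buffer = timedelta(minutes=15)             # park, walk in, collect tickets

pickup = showtime - drive_time - buffer
print(pickup.strftime("%I:%M %p"))         # 06:50 PM
```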

Prasad imagines many other scenarios that might require more complex reasoning. You could imagine a skill, for example, that would allow you to ask your Echo Buds where the tomatoes are while you’re standing in a Whole Foods. The Buds will need to register which Whole Foods you’re in, access a map of its floor plan, and then tell you the tomatoes are in aisle seven.

In another scenario, you might ask Alexa through your communal home Echo to send you a notification if your flight is delayed. When it’s time to do so, perhaps you are already driving. Alexa needs to realize (by identifying your voice in your initial request) that you, not a roommate or family member, need the notification—and, based on the last Echo-enabled device you interacted with, that you are now in your car. Therefore, the notification should go to your car rather than your home.

This level of prediction and reasoning will also need to account for video data as more and more Alexa-compatible products include cameras. Let’s say you’re not home, Prasad muses, and a Girl Scout knocks on your door selling cookies. The Alexa on your Amazon Ring, a camera-equipped doorbell, should register (through video and audio input) who is at your door and why, know that you are not home, send you a note on a nearby Alexa device asking how many cookies you want, and order them on your behalf.

To make this possible, Prasad’s team is now testing a new software architecture for processing user commands. It involves filtering audio and visual information through many more layers. First Alexa needs to register which skill the user is trying to access among the roughly 100,000 available. Next it will have to understand the command in the context of who the user is, what device that person is using, and where. Finally it will need to refine the response on the basis of the user’s previously expressed preferences.
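A toy sketch of that three-layer flow, with hypothetical stand-ins for each stage (nothing here is Amazon's implementation):

```python
# Toy three-layer pipeline mirroring the architecture described above.
# Every function here is a hypothetical stand-in, not Amazon's code.

SKILLS = {
    "flight": lambda utt, ctx: f"Flight alert armed for {ctx['user']}. Details follow.",
    "weather": lambda utt, ctx: "Sunny and 72.",
}

def route_skill(utterance):
    """Layer 1: pick a skill (the real system ranks ~100,000 candidates)."""
    for keyword, skill in SKILLS.items():
        if keyword in utterance.lower():
            return skill
    return lambda utt, ctx: "Sorry, I can't help with that."

def personalize(response, user):
    """Layer 3: refine using stored preferences (toy preference store)."""
    prefs = {"alice": "short"}
    if prefs.get(user) == "short":
        return response.split(".")[0] + "."
    return response

def handle(utterance, user, device):
    context = {"user": user, "device": device}  # Layer 2: who, what device, where
    return personalize(route_skill(utterance)(utterance, context), user)

print(handle("Tell me if my flight is delayed", user="alice", device="car"))
# -> "Flight alert armed for alice."
```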

Why It’s Hot:  “This is what I believe the next few years will be about: reasoning and making it more personal, with more context,” says Prasad. “It’s like bringing everything together to make these massive decisions.”

Adobe debuts latest effort in the misinformation arms race

Adobe has previewed an AI tool that analyzes the pixels of an image to determine the probability that it has been manipulated, along with the areas where it thinks the manipulation took place, shown as a heat map.
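Adobe hasn't published its model, but a classical forensic heuristic gives the flavor of a per-region manipulation map: Error Level Analysis recompresses a JPEG and diffs it against the original, since edited regions often recompress differently. A minimal sketch of ELA (explicitly not Adobe's method):

```python
import io
from PIL import Image, ImageChops  # pip install pillow

def ela_map(path, quality=90):
    """Recompress the image and return the per-pixel difference ("error level")."""
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, "JPEG", quality=quality)
    recompressed = Image.open(buf)
    return ImageChops.difference(original, recompressed)

# Brighter regions in the result (usually rescaled for visibility) hint at
# areas with a different compression history, i.e. possible edits.
ela_map("photo.jpg").save("ela_heatmap.png")
```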

It’s fitting that the company that made sophisticated photo manipulation possible would also create a tool to help combat its nefarious use. While it’s not live in Adobe applications yet, it could be integrated into them, such that users can quickly know whether what they’re looking at is “real” or not.

Up next: the inevitable headline about someone creating a tool that can trick the Adobe AI into thinking a manipulated photo is real.

Why it’s hot:

Fake news is a big problem, and this might help us get to the truth of some matters of consequence.

But … not everything can be solved with AI. This might help people convince others that something they saw is in fact fake, but it doesn’t overcome the deeper problem of people’s basic gullibility, lack of critical thinking, and strong desire to justify their already entrenched beliefs.

Source: The Verge

Google Claims a Quantum Breakthrough That Could Change Computing

Google said on Wednesday that it had achieved a long-sought breakthrough called “quantum supremacy,” which could allow new kinds of computers to do calculations at speeds that are inconceivable with today’s technology.

The Silicon Valley giant’s research lab in Santa Barbara, Calif., reached a milestone that scientists had been working toward since the 1980s: Its quantum computer performed a task that isn’t possible with traditional computers, according to a paper published in the science journal Nature.

A quantum machine could one day drive big advances in areas like artificial intelligence and make even the most powerful supercomputers look like toys. The Google device did in 3 minutes 20 seconds a mathematical calculation that supercomputers could not complete in under 10,000 years, the company said in its paper.

Scientists likened Google’s announcement to the Wright brothers’ first plane flight in 1903 — proof that something is really possible even though it may be years before it can fulfill its potential.

Still, some researchers cautioned against getting too excited about Google’s achievement since so much more work needs to be done before quantum computers can migrate out of the research lab. Right now, a single quantum machine costs millions of dollars to build.

Many of the tech industry’s biggest names, including Microsoft, Intel and IBM as well as Google, are jockeying for a position in quantum computing. And venture capitalists have invested more than $450 million into start-ups exploring the technology, according to a recent study.

China is spending $400 million on a national quantum lab and has filed almost twice as many quantum patents as the United States in recent years. The Trump administration followed suit this year with its own National Quantum Initiative, promising to spend $1.2 billion on quantum research, including computers.

A quantum machine, the result of more than a century’s worth of research into a type of physics called quantum mechanics, operates in a completely different manner from regular computers. It relies on the mind-bending ways some objects act at the subatomic level or when exposed to extreme cold, like the metal chilled to nearly 460 degrees below zero inside Google’s machine.

“We have built a new kind of computer based on some of the unusual capabilities of quantum mechanics,” said John Martinis, who oversaw the team that managed the hardware for Google’s quantum supremacy experiment. Noting the computational power, he added, “We are now at the stage of trying to make use of that power.”

On Monday, IBM fired a pre-emptive shot with a blog post disputing Google’s claim that its quantum calculation could not be performed by a traditional computer. The calculation, IBM argued, could theoretically be run on a current computer in less than two and a half days — not 10,000 years.

“This is not about final and absolute dominance over classical computers,” said Dario Gil, who heads the IBM research lab in Yorktown Heights, N.Y., where the company is building its own quantum computers.

Other researchers dismissed the milestone because the calculation was notably esoteric. It generated random numbers using a quantum experiment that can’t necessarily be applied to other things.

As its paper was published, Google responded to IBM’s claims that its quantum calculation could be performed on a classical computer. “We’ve already peeled away from classical computers, onto a totally different trajectory,” a Google spokesman said in a statement. “We welcome proposals to advance simulation techniques, though it’s crucial to test them on an actual supercomputer, as we have.”

Source: NY Times

Why It’s Hot

It’s hard to even fathom what possibilities this opens, but it seems application is still a while away.

Your phone’s camera didn’t capture the moment. It computed it.

The way our cameras process and represent images is changing in a subtle but fundamental way, shifting cameras from ‘capturing the moment’ to creating it with algorithmic computations.

Reporting about the camera on Google’s new Pixel 4 smartphone, Brian Chen of the New York Times writes:

“When you take a digital photo, you’re not actually shooting a photo anymore.

‘Most photos you take these days are not a photo where you click the photo and get one shot,’ said Ren Ng, a computer science professor at the University of California, Berkeley. ‘These days it takes a burst of images and computes all of that data into a final photograph.’

Computational photography has been around for years. One of the earliest forms was HDR, for high dynamic range, which involved taking a burst of photos at different exposures and blending the best parts of them into one optimal image.

Over the last few years, more sophisticated computational photography has rapidly improved the photos taken on our phones.”
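OpenCV ships a version of exactly this burst-blending step. A minimal sketch using Mertens exposure fusion, which merges a bracketed burst without even needing the exposure values (filenames are placeholders):

```python
import cv2  # pip install opencv-python

# A burst of the same scene at different exposures (under, normal, over).
burst = [cv2.imread(p) for p in ("under.jpg", "normal.jpg", "over.jpg")]

merge = cv2.createMergeMertens()
fused = merge.process([img.astype("float32") / 255 for img in burst])

# process() returns floats in roughly [0, 1]; scale back to 8-bit to save.
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))
```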

This technology is evident in Google’s Night Sight, which is capable of capturing low-light photos without a flash.

Why it’s hot: 

In a world where the veracity of photographs and videos is coming into question because of digital manipulation, it’s interesting that alteration is now baked in.

Immortalized in Film…? Not so fast.

Tencent Shows The Future Of Ads; Will Add Ads In Existing Movies, TV Shows

One of China’s largest online video platforms is setting out to use technology to integrate branded content into movies and TV shows from any place or era.

(Yes, a Starbucks on Tatooine…or Nike branded footwear for the first moonwalk.)

Why It’s Hot:  

  1. Potentially exponential expansion of available ad inventory
  2. Increased targetability by interest, plus top-spin of borrowed interest
  3. Additional revenue streams for content makers
  4. New questions of the sanctity of creative vision, narrative intent and historical truth

Advertising is an integral part of any business, and with increasing competition, it’s more important than ever to be visible. Mirriad, a computer-vision and AI-powered platform company, recently announced a partnership with Tencent that is about to change the advertising game. If you didn’t know, Tencent is one of the largest online video platforms in China. So how does it change the advertising game, you ask?

Mirriad’s technology enables advertisers to reach their target audience by integrating branded content (or ads) directly into movies and TV series. So, for instance, if an actor is holding just a regular cup of joe in a movie, this new API will enable Tencent to change that cup of coffee into a branded cup of coffee. Matthew Brennan, a speaker and writer who specialises in analysing Tencent and WeChat, shared a glimpse of how this tech works.
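Under the hood this is a compositing problem: track a flat region (a cup label, a billboard) across frames and warp the brand artwork onto it. A minimal single-frame sketch with OpenCV, with hand-picked corner coordinates standing in for a real tracker (illustrative geometry only, not Mirriad's pipeline):

```python
import cv2
import numpy as np

frame = cv2.imread("movie_frame.jpg")
logo = cv2.imread("brand_logo.png")

# Corners of the target surface in the frame (top-left, top-right,
# bottom-right, bottom-left). A real system would track these every frame.
target = np.float32([[420, 180], [560, 190], [555, 330], [415, 320]])
h, w = logo.shape[:2]
source = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Warp the artwork into the scene's perspective and paste it in.
M = cv2.getPerspectiveTransform(source, target)
warped = cv2.warpPerspective(logo, M, (frame.shape[1], frame.shape[0]))
mask = warped.sum(axis=2) > 0
frame[mask] = warped[mask]

cv2.imwrite("branded_frame.jpg", frame)
```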

While we’re not sure if these ads will be clickable, they’ll still have a significant subconscious impact, if not a direct one. Marketers have long talked of mood marketing that builds a personal connection between the brand and the targeted user. So, with the ability to insert ads into crucial scenes and moments, advertisers will now be able to engage with their target users in a way that wasn’t possible before.

Mirriad currently has a 2-year contract with Tencent where they’ll trial exclusively on the latter’s video platform. But if trials are successful in that they don’t offer a jarring viewing experience, we can soon expect this tech to go mainstream.

How We are AI – by NY Times

Would be hard to summarize this in-depth article/exposé from NYT, but…

A.I. Is Learning From Humans. Many Humans.

Artificial intelligence is being taught by thousands of office workers around the world. It is not exactly futuristic work.

  • A.I., most people in the tech industry would tell you, is the future of their industry, and it is improving fast thanks to something called machine learning. But tech executives rarely discuss the labor-intensive process that goes into its creation. A.I. is learning from humans. Lots and lots of humans.
  • Before an A.I. system can learn, someone has to label the data supplied to it. Humans, for example, must pinpoint the polyps. The work is vital to the creation of artificial intelligence like self-driving cars, surveillance systems and automated health care.

  • Tech companies keep quiet about this work. And they face growing concerns from privacy activists over the large amounts of personal data they are storing and sharing with outside businesses.

  • Tens of thousands more workers, independent contractors usually working in their homes, also annotate data through crowdsourcing services like Amazon Mechanical Turk, which lets anyone distribute digital tasks to independent workers in the United States and other countries. The workers earn a few pennies for each label.

  • Based in India, iMerit labels data for many of the biggest names in the technology and automobile industries. It declined to name these clients publicly, citing confidentiality agreements. But it recently revealed that its more than 2,000 workers in nine offices around the world are contributing to an online data-labeling service from Amazon called SageMaker Ground Truth. Previously, it listed Microsoft as a client.

  • One day, who knows when, artificial intelligence could hollow out the job market. But for now, it is generating relatively low-paying jobs. The market for data labeling passed $500 million in 2018 and it will reach $1.2 billion by 2023, according to the research firm Cognilytica. This kind of work, the study showed, accounted for 80 percent of the time spent building A.I. technology.

  • This work can be so upsetting that iMerit tries to limit how much of it workers see. Pornography and violence are mixed with more innocuous images, and those labeling the grisly images are sequestered in separate rooms to shield other workers, said Liz O’Sullivan, who oversaw data annotation at an A.I. start-up called Clarifai and has worked closely with iMerit on such projects. “I would not be surprised if this causes post-traumatic stress disorder — or worse. It is hard to find a company that is not ethically deplorable that will take this on,” she said. “You have to pad the porn and violence with other work, so the workers don’t have to look at porn, porn, porn, beheading, beheading, beheading.”

     Source: NYT

Why It’s Hot: Amid all the tech-first talk of AI, this was FASCINATING to me. I did not know this was the reality of “training AI.”

Phone a Friend: a mobile app for predicting teen suicide attempts

Rising suicide rates in the US are disproportionately affecting 10-24 year-olds, with suicide as the second leading cause of death after unintentional injuries. It’s a complex and multifaceted topic, and one that leaves those whose lives are impacted wondering what they could have done differently, to recognize the signs and intervene.

Researchers are hard at work figuring out whether a machine learning algorithm might be able to use data from an individual’s mobile device to assess risk and predict an imminent suicide attempt, before there may even be any outward signs. This work is part of the Mobile Assessment for the Prediction of Suicide (MAPS) study, involving 50 teenagers in New York and Pennsylvania. If successful, the effort could lead to a viable solution to an increasingly troubling societal problem.

Why It’s Hot

We’re just scratching the surface of the treasure trove of insights that might be buried in the mountains of data we’re all generating every day. Our ability to understand people more deeply, without relying on “new” sources of data, will have implications for the experiences brands and marketers deliver.

Selfies Get Serious: Introducing the 30-second selfie full-fitness checkup

Keeping an eye on subtle changes in common health risks is not an easy task for the average person. Yet by the time real symptoms are obvious, it’s often too late to take the kind of action that would prevent a problem from snowballing.

Researchers at the University of Toronto have developed an app that appears capable of turning a 30-second selfie into a diagnostic tool for quantifying a range of health risks.

“Anura promises an impressively thorough physical examination for just half a minute of your time. Simply based on a person’s facial features, captured through the latest deep learning technology, it can assess heart rate, breathing, stress, skin age, vascular age, body mass index (yes, from your face!), cardiovascular disease, heart attack and stroke risk, cardiac workload, vascular capacity, blood pressure, and more.”

It’s easy to be skeptical about the accuracy of results possible from simply looking at a face for 30 seconds, but the researchers have demonstrated up to 96% accuracy in measuring blood pressure, and when the objective is to give people a way of realizing when it might be time to take action, that level of accuracy may actually be more than enough.
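The researchers' technique is called transdermal optical imaging; its simpler cousin, remote photoplethysmography, shows why a camera can read vitals at all: each heartbeat causes tiny color changes in facial skin. A toy sketch of pulling a pulse estimate from a stack of face frames (illustrative, not Anura's method):

```python
import numpy as np

def estimate_bpm(frames, fps=30):
    """frames: array of shape (n_frames, height, width, 3), cropped to facial skin."""
    # Per-heartbeat blood volume changes show up most strongly in green.
    signal = frames[..., 1].mean(axis=(1, 2))
    signal = signal - signal.mean()

    freqs = np.fft.rfftfreq(len(signal), d=1 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2

    # Take the dominant frequency inside the plausible heart-rate band.
    band = (freqs > 0.7) & (freqs < 4.0)  # 42-240 bpm
    return 60 * freqs[band][np.argmax(power[band])]

# With ~30 seconds of video (900 frames at 30 fps) the peak is usually stable.
```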

Why It’s Hot

For marketers looking to better identify the times, places and people for whom their products and services are likely to be most relevant, the convergence of biometrics with advanced algorithms and AI – all in a device most people carry around with them every day – could be a game-changer.

(This also brings up perennial issues of privacy & personal information, and trade-offs we need to make for the benefits emerging tech provides.)

The AI Drone Crocodile Hunter

Last summer, Australia began testing drones at its beaches to help spot distressed swimmers, acting as overhead lifeguards. Now the same company that created that technology, Ripper Group, is creating an algorithm for its drones to spot crocodiles.

While not frequent, crocodile attacks have gone up in recent years. And crocodiles are not easily spotted, since they can spend up to 45 minutes under murky water. So the Ripper Group is using machine learning to train drones to distinguish crocodiles from 16 other marine animals, boats, and humans, using a large database of images.
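A hedged sketch of how a classifier like that is typically trained: fine-tune a pretrained image network on labeled aerial shots. The folder layout and classes below are hypothetical, not Ripper Group's data:

```python
import tensorflow as tf  # pip install tensorflow

# Hypothetical layout: data/crocodile/, data/dolphin/, data/swimmer/, ...
train = tf.keras.utils.image_dataset_from_directory(
    "data/", image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained features; train only the head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1 / 127.5, offset=-1),  # MobileNet wants [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(len(train.class_names), activation="softmax"),
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train, epochs=5)
```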

The drones also include warning sirens and flotation devices for up to four people, to assist in emergency rescue when danger is spotted.

Why It’s Hot

Lifeguards are limited in what they can see and how quickly they can act. With the assistance of drones, beachgoers can stay carefree.

Source

Make getting drunk great again

British data science company DataSparQ has developed facial recognition-based AI technology to prevent entitled bros from cutting the line at bars. The technology “puts customers in an ‘intelligently virtual’ queue, letting bar staff know who really was next” and who’s cutting the line.

“The system works by displaying a live video of everyone queuing on a screen above the bar. A number appears above each customer’s head — which represents their place in the queue — and gives them an estimated wait time until they get served. Bar staff will know exactly who’s next, helping bars and pubs to maximise their ordering efficiency and to keep the drinks flowing.”
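The queueing logic itself is simple once faces are tracked: whoever was seen first is served first. A toy sketch of just the ordering step, with a hypothetical tracker feeding it IDs (not DataSparQ's system):

```python
queue = {}  # track_id -> arrival order index

def update_queue(visible_track_ids):
    """Assign queue positions by first appearance on camera."""
    for tid in visible_track_ids:  # IDs come from a face tracker (hypothetical)
        queue.setdefault(tid, len(queue))
    # Rank everyone still in frame by arrival; position 1 gets served next.
    order = sorted((t for t in queue if t in visible_track_ids), key=queue.get)
    return {tid: pos + 1 for pos, tid in enumerate(order)}

print(update_queue(["face_A", "face_B"]))  # {'face_A': 1, 'face_B': 2}
print(update_queue(["face_B", "face_C"]))  # face_A left: {'face_B': 1, 'face_C': 2}
```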

Story on Engadget

Why it’s Hot

Using AI to help solve these types of trifling irritations is better than having to tolerate other people’s sense of entitlement, though it also highlights the need to police rude behavior through something other than raising your kids well.

Retail wants a Minority Report for returns

In what now seems inevitable, an online fashion retailer in India owned by an e-commerce startup that’s backed by Walmart is doing research with Deep Neural Networks to predict which items a buyer will return before they buy the item.

With this knowledge, they’ll be better able to predict their returns costs, but more interestingly, they’ll be able to incentivize shoppers to NOT return as much, using both loss and gain offers related to items in one’s cart.

The nuts and bolts of it: the AI will assign you a score based on what it determines your risk of returning a specific item to be. This data could come from your returns history, as well as less obvious data points, such as your search/shopping patterns elsewhere online, your credit score, and predictions about your size and fit based on aggregated data on other people.

Then it will treat you differently based on that assessment. If you’re put in a high risk category, you may pay more for shipping, or you may be offered a discount in order to accept a no-returns policy tailored just for you. It’s like car insurance for those under 25, but on hyper-drive. If you fit a certain demo, you may start paying more for everything.
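A hedged sketch of the scoring step: fit a classifier on past orders and read the predicted probability as the risk score. The features and data below are toys, not the retailer's actual signals:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression  # pip install scikit-learn

# Toy training rows: [past_return_rate, items_in_cart, sizes_of_same_item]
X = np.array([[0.10, 1, 1], [0.65, 4, 3], [0.05, 2, 1], [0.80, 6, 2],
              [0.30, 3, 2], [0.02, 1, 1], [0.55, 5, 3], [0.15, 2, 1]])
y = np.array([0, 1, 0, 1, 0, 0, 1, 0])  # 1 = the order was returned

model = LogisticRegression().fit(X, y)

# Score a new cart: the probability this purchase comes back.
risk = model.predict_proba([[0.70, 5, 3]])[0, 1]
print(f"return risk: {risk:.0%}")  # a high score might trigger a no-returns offer
```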

Preliminary tests have shown promise in reducing return rates.

So many questions:

Is this a good idea from a brand perspective? If this becomes a trend, will retailers with cheap capital that can afford high-returns volume smear this practice as a way to gain market share?

Will this drive more people to better protect their data and “hide” themselves online? We might be OK with being fed targeted ads based on our data, but what happens when your data footprint and demo makes that jacket you wanted cost more?

Will this encourage more people to shop at brick and mortar stores to sidestep retail’s big brother? Or will brick and mortar stores find a way to follow suit?

How much might this information flow back up the supply chain, to product design, even?

Why it’s hot

Returns are expensive for retailers. They’re also bad for the environment, as many returns are just sent to the landfill, not to mention the carbon emissions from shipping them back.

So, many retailers are scrambling to find the balance between reducing friction in the buying process by offering easy returns, on the one hand, and reducing the amount of actual returns, on the other.

There’s been talk of Amazon using predictive models to ship you stuff without you ever “buying” it. You return what you don’t want, and it eventually learns your preferences to the point where you just receive a box of stuff at intervals, and money is extracted from your bank account. This might also reduce fossil fuel use.

How precise can these predictive models get? And how might people be able to thwart them? Is there a non-dystopian way to reduce returns?

Source: ZDNet

A monkey has been able to control a computer with his brain


[Images: Neuralink graphic; the N1 sensor; the N1 array in action.]

Neuralink, the startup Elon Musk founded in 2017, is working on technology based around “threads,” which it says can be implanted in human brains with much less potential impact on the surrounding tissue than the probes used for today’s brain-computer interfaces. “Most people don’t realize, we can solve that with a chip,” Musk said to kick off Neuralink’s event, talking about some of the brain disorders and issues the company hopes to solve.

Musk also said that, long-term, Neuralink really is about figuring out a way to “achieve a sort of symbiosis with artificial intelligence.” He went on to say, “This is not a mandatory thing. This is something you can choose to have if you want.”

For now, however, the aim is medical, and the plan is to use a robot that Neuralink has created, which operates somewhat like a “sewing machine,” to implant these threads, which are incredibly thin (between 4 and 6 μm, about one-third the diameter of the thinnest human hair), deep within a person’s brain tissue, where they will be capable of performing both read and write operations at very high data volume.

These probes are incredibly fine, far too small to insert by human hand, so Neuralink has developed a robot that can stitch them in through an incision. The incision is initially cut to two millimeters, then dilated to eight millimeters; the threads are placed in and the incision is glued shut. The surgery can take less than an hour.

No wires poking out of your head
It uses an iPhone app to interface with the neural link, with a simple interface to train people how to use the link. “It basically bluetooths to your phone,” Musk said.

Is there going to be a brain app store? Will we have ads in our brain?
“Conceivably there could be some kind of app store thing in the future,” Musk said. While ads on phones are mildly annoying, ads in the brain could be a disaster waiting to happen.

Why it’s hot:
A.I.: you won’t be able to beat it, so join it
Interfacing our brains with machines may save us from an artificial intelligence doomsday scenario. According to Elon Musk, if we want to avoid becoming the equivalent of primates in an AI-dominated world, connecting our minds to computing capabilities is a solution that needs to be explored.

“This is going to sound pretty weird, but [we want to] achieve a symbiosis with artificial intelligence,” Musk said. “This is not a mandatory thing! This is a thing that you can choose to have if you want. I think this is going to be something really important at a civilization-scale level. I’ve said a lot about A.I. over the years, but I think even in a benign A.I. scenario we will be left behind.”

Think about the kind of “straight from the brain” data we would have at our disposal, and how we would use it.


Forget a Thousand Words. Pictures Could Be Worth Big Bucks for Amazon Fashion – Adweek

Amazon is rolling out StyleSnap, its AI-enabled shopping feature that helps you shop from a photograph or snapshot. Consumers upload images to the Amazon app and it considers factors like brand, price and reviews to recommend similar items.

Amazon has been able to leverage data from brands sold on its site to develop products that are good enough or close enough to the originals, usually at lower price points, and thereby gain an edge, but it’s still mostly a destination for basics like T-shirts and socks. With StyleSnap, Amazon is hoping to further crack the online fashion sector.
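The usual mechanics behind shop-from-a-photo: embed every catalog image with a pretrained network once, embed the uploaded snapshot, and return nearest neighbors by cosine similarity. A minimal sketch (illustrative, not Amazon's system; file paths are placeholders):

```python
import numpy as np
import tensorflow as tf

encoder = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")

def embed(path):
    img = tf.keras.utils.load_img(path, target_size=(224, 224))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(np.array(img)[None])
    vec = encoder.predict(x, verbose=0)[0]
    return vec / np.linalg.norm(vec)

# Offline: embed the catalog once. Online: embed the snapshot and rank.
catalog = {name: embed(f"catalog/{name}.jpg")
           for name in ("dress1", "dress2", "jacket1")}
query = embed("snapshot.jpg")

matches = sorted(catalog, key=lambda name: -catalog[name] @ query)
print(matches[:3])  # a real ranker would also weigh brand, price, and reviews
```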

Why It’s Hot

Snapping and sharing is already part of retail culture, and now Amazon is creating a simple and seamless way of adding shopping and purchasing to this ubiquitous habit. The combination of AI and user reviews in its algorithm could change the way we shop, with recommendations based not only on the look of an item but also on how customers experience it.


Source: Forget a Thousand Words. Pictures Could Be Worth Big Bucks for Amazon Fashion – Adweek

Other sources: https://www.cnet.com/news/amazon-stylesnap-uses-ai-to-help-you-shop-for-clothes/

Applying AI for Social Good

By Ankita Pamnani

Interest in Artificial Intelligence (AI) has dramatically increased in recent years, and AI has been successfully applied to societal challenges. It has great potential to provide tremendous social good in the future.

Real-life AI is already being applied in about one-third of the social-good use cases that have been studied, albeit in relatively small tests. Applications range from diagnosing cancer to helping blind people navigate their surroundings, identifying victims of online sexual exploitation, and aiding disaster-relief efforts.

AI has a broad potential across a range of social domains.

  • Education
    • These include maximizing student achievement and improving teachers’ productivity. For example, adaptive-learning technology could base the content recommended to students on their past success and engagement with the material.
  • Public and Social Sector
  • Economic Empowerment
    • With an emphasis on currently vulnerable populations, these domains involve opening access to economic resources and opportunities, including jobs, the development of skills, and market information. For example, AI can be used to detect plant damage early through low-altitude sensors, including smartphones and drones, to improve yields for small farms.
  • Environment
    • Sustaining biodiversity and combating the depletion of natural resources, pollution, and climate change are challenges in this domain.

Some of the issues that we are currently facing with social data:

  • Data needed for social-impact uses may not be easily accessible
    • Much of the data essential or useful for social-good applications are in private hands or in public institutions that might not be willing to share their data. These data owners include telecommunications and satellite companies; social-media platforms; financial institutions (for details such as credit histories); hospitals, doctors, and other health providers (medical information); and governments (including tax information for private individuals).
  • The expert AI talent needed to develop and train AI models is in short supply
    • The complexity of problems increases significantly when use cases require several AI capabilities to work together cohesively, as well as multiple different data-type inputs. Progress in developing solutions for these cases will thus require high-level talent, for which demand far outstrips supply and competition is fierce.
  • ‘Last-mile’ implementation challenges are also a significant bottleneck for AI deployment for social good
    • Organizations may also have difficulty interpreting the results of an AI model. Even if a model achieves a desired level of accuracy on test data, new or unanticipated failure cases often appear in real-life scenarios.


McDonald’s Personalizes the Drive-Thru Menu

Next time you pull up to a McDonald’s drive-thru, you might see exactly what you’re craving front and center. Menus will be personalized based on factors like weather, local events, restaurant traffic, and trending items.

This new technology will be powered by their acquisition of personalization company Dynamic Yield. The menu can be programmed against triggers with scenarios such as offering ice cream and iced coffee when the temperature rises above 80 degrees, or pushing hot chocolate when it starts to rain.
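The trigger logic itself can be plain rules layered on top of the data feeds. A toy sketch of the scenarios described above, with hypothetical thresholds:

```python
def featured_items(temp_f, raining, trending):
    """Pick what to push to the top of the drive-thru menu."""
    picks = []
    if temp_f > 80:
        picks += ["ice cream", "iced coffee"]
    if raining:
        picks.append("hot chocolate")
    picks += trending[:2]  # this restaurant's current top sellers
    return picks

print(featured_items(temp_f=84, raining=False, trending=["McFlurry", "fries"]))
# -> ['ice cream', 'iced coffee', 'McFlurry', 'fries']
```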

Once a person starts ordering, the menu will offer add-ons based on the previous selections made. For example, a person ordering a salad may be offered a smoothie instead of fries.

Why It’s Hot

McDonald’s already builds off of customers’ cravings. Now that these cravings can be predicted, personalized, and optimized over time, there’s a high likelihood that customers will be ordering more at the drive-thru window.

DeepMind? Pffft! More like “dumb as a bag of rocks.”

Google’s DeepMind AI project, self-described as “the world leader in artificial intelligence research,” was recently tested against the type of math test that 16-year-olds take in the UK. The result? It scored only 14 out of 40 correct. Womp womp!

“The researchers tested several types of AI and found that algorithms struggle to translate a question as it appears on a test, full of words and symbols and functions, into the actual operations needed to solve it.” (Medium)


Why It’s Hot

There is no shortage of angst by humans worried about losing their jobs to AI. Instead of feeling a reprieve, humans should take this as a sign that AI might just be best designed to complement human judgements and not to replace them.

AI Voice Assistant

An AI voice assistant performs tasks and services for an end user. Such tasks, historically performed by a personal assistant or secretary, include taking dictation, reading text or email messages aloud, looking up phone numbers, scheduling, placing phone calls and reminding the end user about appointments. Popular virtual assistants currently include Amazon Alexa, Apple’s Siri, Google Now and Microsoft’s Cortana — the digital assistant built into Windows Phone 8.1 and Windows 10.

Why it’s hot:

  • Intelligent Personal Assistant: This is software that can assist people with basic tasks, usually using natural language. Intelligent personal assistants can go online and search for an answer to a user’s question. Either text or voice can trigger an action.

  • Smart Assistant: This term usually refers to the types of physical items that can provide various services by using smart speakers that listen for a wake word to become active and perform certain tasks (a toy wake-word loop is sketched after this list). Amazon’s Echo, Google’s Home, and Apple’s HomePod are types of smart assistants.

  • Virtual Digital Assistants: These are automated software applications or platforms that assist the user by understanding natural language in either written or spoken form.

  • Voice Assistant: The key here is voice. A voice assistant is a digital assistant that uses voice recognition, speech synthesis, and natural language processing (NLP) to provide a service through a particular application.
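Here's the promised toy wake-word loop, using the open-source SpeechRecognition library: transcribe a short audio window and only act when the trigger word appears. (Illustrative only; real devices do this on-chip with far more efficient models, and this sketch needs a microphone plus the PyAudio dependency.)

```python
import speech_recognition as sr  # pip install SpeechRecognition pyaudio

recognizer = sr.Recognizer()
WAKE_WORD = "computer"  # hypothetical trigger word

with sr.Microphone() as mic:
    while True:
        audio = recognizer.listen(mic, phrase_time_limit=3)
        try:
            text = recognizer.recognize_google(audio).lower()
        except sr.UnknownValueError:
            continue  # nothing intelligible in this window
        if WAKE_WORD in text:
            print("Awake! Now listening for a command...")
```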

Tractica is a market intelligence firm that focuses on human interaction with technology. Their reports say unique consumer users for virtual digital assistants will grow from more than 390 million worldwide users in 2015 to 1.8 billion by the end of 2021. The growth in the business world is expected to increase from 155 million users in 2015 to 843 million by 2021. With that kind of projected growth, revenue is forecasted to grow from $1.6 billion in 2015 to $15.8 billion in 2021.

At Unilever, Resumes are Out – Algorithms are In

The traditional hiring process for companies, especially large organizations, can be exhausting and often ineffective, with 83% of candidates rating their experience as “poor” and 30-50% of the candidates companies choose ending up failing.

Unilever recruits more than 30,000 people a year and processes around 1.8 million job applications. As you can imagine, this takes a tremendous amount of time and resources and too often talented candidates are overlooked just because they’re buried at the bottom of a pile of CVs. To tackle this problem, Unilever partnered with Pymetrics, an online platform on a mission to make the recruiting process more predictive and less biased than traditional methods.

Candidates start the interview process by accessing the platform at home from a computer or mobile-screen, and playing a selection of games that test their aptitude, logic and reasoning, and appetite for risk. Machine learning algorithms are then used to assess their suitability for whatever role they have applied for, by matching their profiles against those of previously successful employees.
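Conceptually, that matching step compares a candidate's trait vector from the games against the average vector of employees who succeeded in the role. A toy sketch with invented traits (not Pymetrics' model):

```python
import numpy as np

# Hypothetical trait scores from the games:
# [risk appetite, planning, focus, memory]
success_profile = np.array([0.62, 0.80, 0.71, 0.55])  # avg of strong performers

def fit_score(candidate):
    """Cosine similarity between a candidate and the success profile."""
    return float(candidate @ success_profile /
                 (np.linalg.norm(candidate) * np.linalg.norm(success_profile)))

print(round(fit_score(np.array([0.58, 0.75, 0.69, 0.60])), 3))  # near 1.0 = match
```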

The second stage of the process involves submitting a video interview that is reviewed not by a human, but a machine learning algorithm. The algorithm examines the videos of candidates who answer various questions, and through a mixture of natural language processing and body language analysis, determines who is likely to be a good fit.

One of the most nerve-wracking aspects of the job interview process can be anticipation of the feedback loop, or lack thereof – around 45% of job candidates claim they never hear back from a prospective employer. But with the AI-powered platform, all applicants get a couple of pages of feedback, including how they did in the game, how they did in the video interviews, what characteristics they have that fit, and if they don’t fit, the reason why they didn’t, and what they believe they should do to be successful in a future application.


Why it’s hot:

Making experiences, even hiring experiences, feel more human with AI. The existing hiring process can leave candidates feeling confused, abandoned, and disadvantaged. Using AI and deep analysis helps hiring managers see candidates for who they are, beyond their age, gender, race, education, and socioeconomic status. Companies like Unilever aren’t just reducing their recruiting costs and time to hire; they’re setting an industry precedent that a candidate’s potential to succeed in the future doesn’t lie in who they know, where they came from or how they appear on paper. [Source: Pymetrics]

Breaking the Bias One Translation at a Time

The words we use daily can directly affect our perception and the way we think. For example, the effect of gender bias on language can influence how both women and men see certain professions. The terms cameraman, fireman and policeman, for example, are perceived as more masculine, while words like midwife are more stereotypically feminine.

Source: https://www.contagious.io/articles/what-do-you-mean

Released on International Women’s Day 2019, ElaN Languages’ Unbias Button translates biased words, such as gendered job titles, into gender-neutral ones.
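The simplest version of such a tool is a lookup table applied after translation. A toy sketch with an illustrative handful of mappings (not ElaN's actual word list):

```python
import re

NEUTRAL = {  # hypothetical subset of a gendered-to-neutral mapping
    "fireman": "firefighter",
    "policeman": "police officer",
    "cameraman": "camera operator",
    "chairman": "chairperson",
}

def unbias(text):
    for gendered, neutral in NEUTRAL.items():
        text = re.sub(rf"\b{gendered}\b", neutral, text, flags=re.IGNORECASE)
    return text

print(unbias("The fireman spoke to the policeman."))
# -> "The firefighter spoke to the police officer."
```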

Why it’s hot: This is a subtle way to change our awareness of the words we use on a daily basis.

Going Paperless in a Brick and Mortar

Lush is known for its colorful soaps and bath bombs, but the brand has consistently prioritized going green above all else—and its very first SXSW activation was no exception.

The brand set up its bath bomb pop-up to showcase its 54 new bath bomb creations using absolutely no signage. Instead, attendees could download the Lush Labs app, which uses AI and machine learning to determine what each bath bomb is with just a quick snapshot. “At Lush, we care about sustainability, and we wanted to take that same lens … and apply it to the way we are using technology,” Charlotte Nisbet, global concept lead at Lush, told Adweek.

Nisbet explained that three decades ago, Lush co-founder Mo Constantine invented the bath bomb when brainstorming a packaging-free alternative to bubble bath. (The new bath bombs are being released globally on March 29 in celebration of 30 years since Constantine created the first bath bomb in her garden shed in England.)

“But we were still facing the barrier to being even more environmentally friendly with packaging and signage in our shops,” Nisbet said.

Enter the Lush Lens feature on the Lush Labs app, which lets consumers scan a product with their phone to see all the key information they’d need before making a purchase: price, ingredients and even videos of what the bath bomb looks like when submerged in water. “This means that not only can we avoid printing signage that will eventually need to be replaced, but also that customers can get information on their products anytime while at home,” Nisbet said.

Why It’s Hot

The application sounds cool, but is this a sustainable direction for more stores to take? As brick-and-mortar stores continue to struggle, we could see many start to experiment with ways to bring digital experiences to consumers already plugged into their smartphones in retail spaces.

Source: Adweek

Woebot – Highly Praised App for Mental Health

AI counseling is the wave of the future. Cognitive behavioral therapy administered by a smart chatbot, via an app relying on SMS, has become highly popular and well reviewed. Woebot isn’t just the face of a trend; it’s a notable player in the technology transforming healthcare.

Why It’s Hot

It’s not new. It’s better. The first counseling software, called Eliza, appeared around 1966. Part of the difficulty was that it required human intervention. Ironically, in 2019, when many believe a lack of human contact to be part of the problem, that void actually addresses a barrier in therapy: the perceived lack of anonymity and privacy. Sure, therapist visits are confidential, blah blah, but people naturally have difficulty opening up in person. Plus there’s the waiting-room anxiety. With an app, studies have shown, people get to the heart of their problem quicker.
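For flavor, Eliza-style "therapy" was little more than pattern reflection, and a few lines reproduce the trick (a toy homage, nothing like Woebot's CBT content):

```python
import re

REFLECT = {"i": "you", "my": "your", "am": "are", "me": "you"}

def eliza(utterance):
    match = re.match(r"i feel (.*)", utterance.lower())
    if match:
        feeling = " ".join(REFLECT.get(w, w) for w in match.group(1).split())
        return f"Why do you feel {feeling}?"
    return "Tell me more."

print(eliza("I feel anxious about my job"))
# -> "Why do you feel anxious about your job?"
```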

Why it Matters

There’s a ton of demand for “talk therapy” and other treatments. Human counselors can’t keep up. People wait weeks and months for appointments, and that’s in the U.S., where therapists are compensated well. In this on-demand age, that’s seen as unacceptable. Woebot and others address the market need for immediate care. Another issue is cost. Therapy is expensive. Apps are an obvious solve here. No co-pay.

Obligatory Statement

All the apps remind users they’re no substitute for human counselors but they are helpful in reflecting behavior patterns and emotional red flags back to their users. At the very least, it’ll help you make the most of your next therapy visit.

Smart cat shelter uses AI to let stray cats in during winter

For stray cats, winter can be fatal. Using AI, a Baidu engineer has devised an AI Smart Cattery to shelter stray cats and help them survive Beijing’s cold winter.

It can accurately identify 174 different cat breeds, letting cats enter and exit as they please. A door slides open if the camera spots a cat, but it won’t work for dogs. Multiple cats can fit inside the space. A fresh-air system monitors oxygen and carbon dioxide levels to ensure the small space is well ventilated.
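The gatekeeping itself is simple once a classifier exists. A toy sketch of the door logic, where classify() is a hypothetical stand-in for the shelter's vision model:

```python
def classify(frame):
    """Hypothetical stand-in for the shelter's vision model."""
    return ("cat", "tabby", 0.97)  # (species, breed, confidence)

def should_open(frame, threshold=0.9):
    species, breed, confidence = classify(frame)
    # Open only for confidently identified cats; dogs and uncertain
    # detections keep the door shut.
    return species == "cat" and confidence >= threshold

print(should_open(frame=None))  # -> True for this canned example
```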

Another neat camera feature: it can also detect whether a cat is sick, identifying four common categories of cat ailments, such as inflammation, skin problems, and physical trauma. Once a cat is identified as needing care, associated volunteers can be informed to come and collect it.

Why it’s Hot: A neat implementation of AI for good. It pushes us to think beyond using AI for just marketing purposes and lets us imagine its role in helping solve human (and animal) problems.