The Countdown Begins… AI Versus the World

AI continues to dominate headlines across every industry. No matter who you are or what you do, your life will somehow be affected by artificial intelligence. Below are just a few charts recently published by the Electronic Frontier Foundation on how quickly AI is catching up with humans.


Why It’s Hot:

Artificial intelligence will continue to get better over time. So much so that researchers at Oxford and Yale predict AI will outperform humans at many activities in the coming decades, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). The researchers believe there is a 50% chance of AI outperforming humans at all tasks within 45 years, and of automating all human jobs within 120 years.

Google giveth, and Google taketh away

Google is playing with my heart again.

Earlier this week, Google announced that it will stop scanning the contents of Gmail in order to deliver targeted ads. Google said it's stopping the practice in order to "more closely align" its business and consumer products. Businesses, which pay for G Suite, have the power to put their foot down where consumers do not.

At the same time, Google announced it is launching an auto-reply system that scans emails and generates possible responses to choose from.


The new functionality, added to the app-store versions of Gmail, works by analyzing a large, anonymized body of email to generate possible responses. Machine-learning systems then rank these to pick the "best responses to the email at hand." Google is keen to emphasize that its system knows its limits: not everything merits an automated response, and only about one-third of emails are covered.
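To make the mechanics concrete, here is a minimal sketch of the rank-and-suggest step. Everything in it is a toy stand-in: Google's real system scores candidates with neural networks trained on huge email corpora, not the word overlap used here.

    # Toy sketch of a Smart Reply-style suggester (not Google's actual model).
    # A real system scores each canned reply with a trained neural network;
    # simple word overlap stands in for that score here.
    CANNED_REPLIES = [
        "Sounds good!",
        "Thanks!",
        "Let me check and get back to you.",
        "Yes, Thursday works for me.",
    ]

    def score(email_text, reply):
        email_words = set(email_text.lower().split())
        reply_words = set(reply.lower().split())
        return len(email_words & reply_words) / len(reply_words)

    def suggest(email_text, k=3):
        ranked = sorted(CANNED_REPLIES, key=lambda r: score(email_text, r),
                        reverse=True)
        return ranked[:k]

    print(suggest("does thursday work for you? thanks!"))

The reported one-third coverage maps naturally onto a score threshold: below it, the system would simply offer nothing.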

Most email is unnecessary, and most email responses are perfunctory acknowledgements: verbal read-receipts. In the war for control of your inbox, Gmail may have given us an important missile defense shield. Nice! Thanks! Love it!

https://www.theguardian.com/technology/shortcuts/2017/jun/27/nice-thanks-love-it-gmails-auto-reply-is-perfect-for-the-lazy-emailer?CMP=Share_iOSApp_Other

Why It’s Hot (or not)
As the behemoths continue to get bigger, their power to impact the ways we interact (or not) continues to grow. The war between ease and humanity continues.

Food Computers Use AI To Make ‘Climate Recipes’ For The Best-Tasting Crops

It's no surprise that climate change is having detrimental effects on our planet, and one of the most troubling is its impact on agriculture. The MIT Media Lab is hoping to remedy this by using special "food computers" to create the perfect climates for growing food, no matter the location or time of year. That means countries could not only farm their local crops all year round, but also grow crops that are not native to their region of the world, meaning they could have fresh produce on demand. Say goodbye to waiting for shipments!

The Open Agriculture Initiative's Personal Food Computer was first created in 2015, and it can study and replicate the best growing conditions for specific plants using sensors, actuators and machine vision. The Personal Food Computer can alter the light, nutrients and salinity of the water. As the computer watches a plant, like basil, grow, it gathers data that can be applied to the next set of crops. The research team is also trying to make the food itself tastier by maximizing the number of volatile molecules inside the crop, which is made possible by leaving the computer on constantly.

Babak Hodjat, CEO of Sentient, says it's all about engineering food in a totally different way: "Ultimately, this is non-GMO GMO. You're not messing with the plant's DNA. You're just allowing it to exhibit the behavior it would in nature should that kind of environment exist."

Source: PSFK

Why it’s Hot

Rolling with the punches, so to speak: in the face of environmental change, we can adapt. At scale, something like this could be an innovation that shifts how we approach agriculture, and it could inspire further environmental innovation.

When will AI outperform humans at work?

352 AI experts forecast a 50% chance that AI will outperform humans at all tasks within 45 years, and take over all human jobs within 120 years. Many of the world's leading machine-learning experts were among those surveyed, including Yann LeCun, director of AI research at Facebook; Mustafa Suleyman of Google's DeepMind; and Zoubin Ghahramani, director of Uber's AI labs.


Get the full research document HERE; see page 14 for details on the predictions.

Are You There Arthur? It’s Me, Marcel

Two days ago, Publicis Groupe CEO and chairman Arthur Sadoun announced that his network would be forgoing all awards, trade shows and other paid promotional efforts for more than a year while developing Marcel, an AI-powered "professional assistant" that it plans to launch next June at the 2018 VivaTech conference in Paris.

This well-timed announcement, right in the middle of Cannes, has generated a huge flurry of press, with speculation about the real driver of the announcement ranging from "It's just a publicity stunt" to "Make no mistake, this is purely about saving money in 2018 as growth has slowed to a crawl" to "It's smart. Award shows are a misguided way to stroke a few people's egos. On top of that, there's a ton of work being done for the sole purpose of winning awards. And the number of shows is ridiculous too."

Regardless of intent, sticking to plan will be difficult. Creatives across Publicis are reportedly up in arms. Surely, the lack of opportunity to stock a trophy case will make it more difficult for Publicis to attract some top creative talent. And, of course, clients like awards too. Poaching is a real concern.

Having gotten sucked into the drama, we've read it all, and this response from R/GA is a favorite. Quick wit has earned R/GA a share of Publicis' spotlight!

Why it’s Hot: Publicity stunts. All the cool agencies are doing it.

Amazon is rolling out a Dash Wand with Alexa to make you buy everything

Amazon wants its Prime subscribers ordering from its online store all the time, so it just cooked up a new device to help them do exactly that, and it's essentially giving it away for free.

The company just launched a new instant-ordering gadget, the Dash Wand, that lets you fill up your Amazon shopping cart by using voice commands or scanning barcodes on the packages you have sitting in your kitchen cupboards.

The Dash Wand is essentially an updated version of the OG Amazon Dash wand that debuted in 2015, but this newer version crucially adds Amazon’s artificially intelligent assistant, Alexa, to help out. The digital assistant can sync your shopping list across Amazon devices, convert units of measurement, and search for recipes.

This is a huge upgrade for Amazon’s instant-ordering devices. The original Dash was significantly bigger, cost more than twice as much as this new one, and only worked with AmazonFresh orders.

Amazon's really pushing the Wand, offering a deal similar to previous promotions for its instant-ordering Dash buttons. If you buy a Dash Wand for $20, you'll immediately qualify for a $20 credit toward your next purchase after registering the device. It literally pays for itself, and you can opt in for a free 90-day AmazonFresh trial, which typically costs $15 per month. It's actually a pretty great deal for anyone with a Prime subscription.

The Wand is also magnetic, so it can live on your fridge close to all of your most frequently ordered foods, and its Alexa access makes it more useful than the Dash buttons, which are restricted to instant ordering of a single item.

You don’t get the full Alexa experience here, though. The Wand can’t play music, and its press-button functionality means it won’t automatically respond to the genial “Hey, Alexa” wake command.

It might sound ridiculous that the company is essentially giving the Wands away with all the discounts and incentives, but it’s a savvy business move. Making the shopping experience easier and offering a new Alexa toy to play with will only drive up orders, as if Amazon needs any help to keep its business afloat.

Source: Mashable

Why It’s Hot

Connected AI experiences make the virtual assistant craze more useful. Amazon is pushing forward on many different ways to connect Alexa with other platforms, and this is a great example of a type of utility that in a few years we will wonder how we lived without.


Start brushing off your resume…

The Mirai is Toyota’s car of the future. It runs on hydrogen fuel cells, gets 312 miles on a full tank and only emits water vapor. So, to target tech and science enthusiasts, the brand is running thousands of ads with messaging crafted based on their interests.

The catch? The campaign was written by IBM’s supercomputer, Watson. After spending two to three months training the AI to piece together coherent sentences and phrases, Saatchi LA began rolling out a campaign last week on Facebook called “Thousands of Ways to Say Yes” that pitches the car through short video clips.

Saatchi LA wrote 50 scripts based on location, behavioral insights and occupation data that explained the car’s features to set up a structure for the campaign. The scripts were then used to train Watson so it could whip up thousands of pieces of copy that sounded like they were written by humans.
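Mechanically, you can picture the scale-up as script templates crossed with audience segments; Watson's job was making each result read like fluent human copy rather than fill-in-the-blank output. A toy sketch of just the combinatorial part (the segments and templates below are invented for illustration):

    # Toy ad-variant generator (illustrative only; the real campaign relied on
    # Watson for fluent copy, not simple templates like these).
    from itertools import product

    locations = ["Los Angeles", "Seattle", "Austin"]
    interests = ["science", "technology", "design"]
    templates = [
        "Hey {loc}: say yes to a car made for {hobby} lovers.",
        "{hobby} fans in {loc}, the Mirai runs on hydrogen and emits only water.",
    ]

    ads = [t.format(loc=loc, hobby=hobby)
           for t, loc, hobby in product(templates, locations, interests)]
    print(len(ads), "variants, e.g.:", ads[0])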

http://www.adweek.com/digital/saatchi-la-trained-ibm-watson-to-write-thousands-of-ads-for-toyota/

Why It’s Hot
It may let us focus more on the design, and less on the production.

holograms, benjamin…

Some genius developer has boldly chosen to experiment with perhaps the world’s most forgotten voice assistant, Microsoft Cortana, and imagined what interacting with her could be like if you added another dimension to it.

In his words – “It’s basically what I imagined Microsoft’s version of Alexa or Google Home would be like if they were to use the holographic AI sidekick from the Halo franchise.”

As seen in the video above, in his prototype, it’s as if you’re speaking to an actual artificial person, making the experience feel more human.

Why it’s hot:
Amazon recently released the Echo Show, which allows skill-makers to add a "face" to their interactions, but this makes that look like a kid's toy. It shows how what started not long ago as primitive voice technology on a phone could quickly turn into actual virtual assistants that look and act like humans, powered by the underlying technology. Plus, the 145 million people who apparently have access to Cortana may no longer ignore it.

Chess, Go, StarCraft

Photo from MIT Technology Review – Professional StarCraft player Byun Hyun Woo playing in the 2016 StarCraft II World Championship Series, which he won.

Scientists continue to train AI to compete professionally in classic strategy games like chess and Go as a sort of basic Turing Test. Now that AI has shown it can out-maneuver humans in those games, some consider StarCraft, a real-time strategy game in which players battle to dominate the map as one of several warring races, to be AI's next challenge.

“When you play StarCraft, you have to respond very quickly to lots of uncertainties and variables, but I’ve noticed that AI like AlphaGo isn’t that good at reacting to unexpected scenarios,” Byun says.

Hot

A StarCraft victory for an AI trained via reinforcement learning would be proof that its intelligence is capable of executing both long- and short-term decisions on the fly, and would bring AI one step closer to human-like decision making.

Full article on MIT Technology Review

googler creates AI that creates video using one image…

One of the brilliant minds at Google has developed an algorithm that can create video from a single image (and already has). The AI does this by predicting each next frame based on the previous one; in this instance it did so 100,000 times to produce the 56-minute video you see above. Per its creator:

"I used videos recorded from train windows, with landscapes that move from right to left, and trained a machine learning (ML) algorithm with it. What you see at the beginning is what the algorithm produced after very little learning. It learns more and more during the video, which is why there are more and more realistic details. Learning is updated every 20s. The results are low resolution, blurry, and not realistic most of the time. But it resonates with the feeling I have when I travel on a train. It means that the algorithm learned the patterns needed to create this feeling. Unlike classical computer-generated content, these patterns are not chosen or written by a software engineer."
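The generation loop itself is simple to sketch. In the toy version below, predict_next is a hypothetical stand-in for the trained model; the key idea is that every predicted frame is fed back in as the input for the next one.

    import numpy as np

    def predict_next(frame):
        # Hypothetical stand-in for the trained next-frame model; it just
        # shifts pixels left, mimicking scenery passing a train window.
        return np.roll(frame, shift=-1, axis=1)

    frame = np.random.rand(64, 64)    # the single seed image
    frames = [frame]
    for _ in range(100):              # the artist ran ~100,000 such steps
        frame = predict_next(frame)   # each output seeds the next prediction
        frames.append(frame)

    video = np.stack(frames)          # shape: (101, 64, 64)
    print(video.shape)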

Why it’s hot:

Creativity and imagination have been among the most inimitable human qualities since forever. And anyone who's ever created anything remotely artistic will tell you inspiration isn't as easy as hitting 'go'. While this demonstration looks more like an art-school video project than a timeless social commentary revered in a museum, it made me wonder: what if bots created art? Would artists compete with them? Would they give up their pursuit because bots can create at the touch of a button? Would this spawn a whole new area of human creativity out of the emotion of having your work held up next to programmatic art? Could artificial intelligence ever create something held up against real human creativity?

Who programs the AI? (Not women or people of color)

You might assume that technology and AI are neutral forces in this world. The truth is, our technology is biased and created in the image of its creators – as Melinda Gates and Fei-Fei Li argue in this interview, these are “guys with hoodies.”

Have you ever?

  • Tried on an Oculus Rift to find that the hardware does not fit your facial profile?
  • Had face tracking software totally fail because it wasn’t programmed to register your traits (standard human features such as eyes, a nose, a mouth)?
  • Had voice assistants / voice recognition not understand you due to your accent or dialect? Perhaps the voice assistant straight up doesn’t speak your native language.

Consider Her and Ex Machina, two recent and popular portrayals of AI in cinema, both of which frame AI, and the characters' interactions with it, through the lens of male psychology and desire.

As Gates points out:
“If we don’t get women and people of color at the table — real technologists doing the real work — we will bias systems. Trying to reverse that a decade or two from now will be so much more difficult, if not close to impossible.”

The entire interview is worth a read.

Together, Gates and Li are launching a national non-profit called AI4ALL, aimed at increasing the diversity of voices behind AI, and getting people of color and women educated in a field where they are highly underrepresented.

Why it’s hot:
AI has the potential to redefine our future. Where is the diversity of minds necessary to make it a future for ALL?

Google’s New AI Tool is a Pandora’s Box of Possibilities

It's a simple idea: turn a selfie into cartoon character stickers (emoji) of yourself. Google's new Allo app, touted by the company as a "smart messaging app" that lets you "express yourself better," includes a new AI feature that uses your smartphone camera and facial recognition technology to generate detailed facial expressions to suit every emotion. According to Fast Co, Google thinks there are 563 quadrillion faces that the tool could generate.

“Illustrations let you bring emotional states in a way that selfies can’t.” Selfies are, by definition, idealizations of yourself. Emoji, by contrast, are distillations and exaggerations of how you feel. To that end, the emoji themselves are often hilarious: You can pick one of yourself as a slice of pizza, or a drooling zombie. “The goal isn’t accuracy,” explains Cornwell. “It’s to let someone create something that feels like themselves, to themselves.” 

Full article here

Why it’s hot:

  • It's another layer of personalization in social media and messaging apps that Snapchat and Instagram will look to integrate. It could also mean the end of Bitmoji as we know it.
  • On a deeper level, there could be many applications outside of entertainment for this type of technology. If you can use AI to better express how you feel to a doctor or nurse, for example, a whole new world of communication could be opened up.
  • And going broader, there’s a big question: as messaging apps get smarter and smarter, do our interactions through them become more or less valuable? When AI is the go-between, are we better expressing ourselves, or is it a substitute for real interaction?

repeat after me…

A Canadian company called Lyrebird has created a way to replicate anyone’s voice using AI. After capturing 60 seconds of anyone talking, the machine can reproduce an individual’s way of speaking. They say they’ve already received thousands of ideas on how people could use this new capability:

Some companies, for example, are interested in letting their users choose to have audio books read in the voice of either famous people or family members. The same is true of medical companies, which could allow people with voice disabilities to train their synthetic voices to sound like themselves, if recorded samples of their speaking voices exist. Another interesting idea is for video game companies to offer the ability for in-game characters to speak with the voice of the human player.


But even bigger, they say their technology will allow people to create a unique voice of their own, with the ability to fully control even the emotion with which it speaks.

Why it’s hot

Besides the fact that it’s another example of life imitating art, we already live in a world where we have quite a bit of control over how we portray ourselves to the world. In the future, could we choose our own voice? Could we have different voices for every situation? How might we ever really be sure we know who we’re speaking to? Does the way someone has chosen to sound change the way we get to know them? And, what if the voices of our friends and family can now be preserved in perpetuity?


Now there’s a dating app in Slack




The dating app Feeld, previously known as 3nder and commonly described as "Tinder for threesomes," has just announced a Slack integration, "Feeld for Slack."

According to Feeld, the bot works like this: open a direct message conversation with Feeld and @-mention someone you "have feelings for." For them to find out that you did, they'll have to initiate a conversation with the bot and mention you back. Otherwise, Feeld promises, your secret dies in Slack. It doesn't mention how long the bot will wait for your crush to reciprocate.
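The mutual-reveal logic is easy to picture. Here is a hypothetical sketch (not Feeld's actual code) of the one rule that matters: nothing is revealed until both mentions exist.

    # Hypothetical sketch of a mutual-crush reveal (not Feeld's actual bot).
    crushes = set()  # one-way (admirer, crush) pairs, never revealed alone

    def mention(admirer, crush):
        crushes.add((admirer, crush))
        if (crush, admirer) in crushes:      # the feeling is mutual
            return f"It's a match: {admirer} and {crush}"
        return None                          # the secret dies in Slack

    print(mention("ana", "ben"))   # None -- one-way, nothing revealed
    print(mention("ben", "ana"))   # mutual -- both can now be notified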

Why it’s hot?
What could possibly go wrong?

Spellfucker (alfah)

https://spellfucker.com/

Spellfucker screws up your spelling to outsmart and confuse web bots.


The goal of the project is to make text hard to read for computers yet fairly easy to read for humans (like bbboing, just differently).

For example, translate.google.com cannot understand it.
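The general trick is easy to sketch: substitute letters and letter groups with phonetically similar, non-dictionary spellings, so a human can still sound the words out while tokenizers and translators choke. The substitutions below are my own toy rules, not Spellfucker's actual algorithm.

    # Toy phonetic obfuscator in the spirit of Spellfucker (not its real rules).
    SUBS = [("ight", "yte"), ("ck", "kk"), ("ou", "ow"),
            ("ee", "ea"), ("i", "ee"), ("c", "k"), ("y", "j")]

    def obfuscate(text):
        for old, new in SUBS:
            text = text.replace(old, new)
        return text

    print(obfuscate("this project is interesting because it creates text"))
    # -> "thees projekt ees eenteresteeng bekause eet kreates text"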

WHY IT'S HOT

Thice proadgecd ees eenteresting bekause eet creytece thexd thad jou end I kan wread, bud (kurentli) metchynese kan’t. eet whould maeke eet moure dyfykuled pher botz thoe exployed eenformation aboued jou, phour egzemple, thergetyngue edz beysed hon wuad jou thipe. An egzemple hoph poatential becklesh/toulce ageinsd AI.


IBM Watson’s New Job as Art Museum Guide

For the launch of IBM Watson in Brazil, Ogilvy Brazil created an interactive guide that lets people have conversations with works housed at the Pinacoteca de São Paulo Museum. "The Voice of Art" replaces pre-recorded audio guides with a Watson-powered program that gleans data from books, old newspapers, recent articles, biographies, interviews and the internet.

It took IBM six months to teach Watson how to make sense of all that content. Hosted on cloud platform IBM Bluemix, its AI capabilities were put to work answering spontaneous questions about art by renowned Brazilian creators like Cândido Portinari, Tarsila do Amaral and José Ferraz de Almeida Júnior.

Conversational scope can range from historical and technical facts (like “What technique was used to create this painting?”) to the piece’s relation to contemporary events.

The video below does a nice job of showing how Watson fields natural questions whose answers feel especially relevant to the person asking, creating a unique connection between viewer and piece. In one cool moment, a boy approaches Portinari’s O Mestiço, a 1934 painting of a shirtless mixed-race man against the backdrop of a coffee farm.

When Ogilvy found out that 72 percent of Brazilians had never been to a museum, it saw an opportunity to use Watson's cognitive intelligence to make their visits far more interactive. At the museum's entrance, visitors receive headphones and a smartphone equipped with the mobile app. As they walk, the app tells them when they're approaching an art piece they can ask questions about. A separate feature for hearing-impaired visitors lets them interact through a built-in written chat tool.

Source: AdWeek

Why it’s hot:

  • This could have a lot of implications for our brands in the future – IBM Watson acting as a tour guide or concierge in different environments could help bridge knowledge gaps for things that need extra explaining, or for consumers that prefer more hands-on experiences.

Computers that recognize hate speech

“Based on text posted on forums and social media, a new machine learning method has been developed to detect antisocial behaviours such as hate speech or indications of violence with high accuracy.”

Link to article
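Under the hood, the method described above is standard supervised text classification. A minimal sketch with scikit-learn (the four inline examples are placeholders, not the researchers' training data):

    # Minimal text-classification sketch (not the paper's actual method).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    texts = ["you people should all disappear",     # placeholder examples;
             "great game last night",               # a real model needs
             "i am going to hurt you",              # thousands of labeled
             "lovely weather today"]                # forum/social posts
    labels = [1, 0, 1, 0]                           # 1 = antisocial

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression())
    model.fit(texts, labels)
    # Flag new posts for a human moderator to review.
    print(model.predict(["you will regret this, i promise"]))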

What this can be used for:

  • Identifying clusters and patterns of hateful speech on social media platforms
  • Preventing hate crimes (“In extreme cases, perpetrators of school shootings or other acts of terror post angry or boastful messages to niche forums before they act.”)
  • Big Brother

On a related note, remember this:
Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day

better living partying through chemistry technology…


[about 2:05-2:45 should do it]

It's not just a clever name: PartyBOT is your "Artificial Dance Assistant," or ADA for short. Debuted at SXSW last month, ADA learns party-goers' musical tastes, drink preferences, and social savvy. Then it uses facial and voice recognition to monitor the room, playing tunes tailored to the interests of those who aren't partying hard enough, as determined by their expressions and conversations. As described by its creators…

“The users’ relationship with the bot begins on a mobile application, where—through a facial recognition activity—the bot will learn to recognize the user and their emotions. Then, the bot will converse with the user about party staples—music, dancing, drinking and socializing—to learn about them and, most importantly, gauge their party potential. (Are they going to be a dance machine or a stick in the mud—the bot, as a bouncer of sorts, is here to find out.)

Upon arrival at the bar, the user will be recognized by PartyBOT, and throughout the party, the bot will work to ensure a personalized experience based on what it knows about them—their favorite music, beverages, and more. (For example, they might receive a notification when the DJ is playing one of their favorite songs.)”

Why it’s hot:

Obviously this was an intentionally lighthearted demonstration of using bots and other technology to improve an experience for people. Apart from knowing you like Bay Breezes, imagine the improved relationship brands could have with their customers by gathering initial preferences and using those to tailor experiences to each individual. Bots are often thought of in a simplistic question/answer/solution form, but this shows how combining AI with other emerging technologies can make for a much more personally exciting overall experience.

Ada: Your Virtual… Doctor?

Ada, a London and Berlin-based health tech startup, makes its official U.K. push this week, joining a number of other European startups attempting to market something similar to an AI-powered 'doctor'. The app has been six years in the making and originally started out as a tool to help doctors avoid misdiagnosis; now it is a "personal health companion and telemedicine app". Via a conversational interface, Ada is designed to help you work out what symptoms you have and offer you information on what might be the cause. If needed, it then offers a follow-up remote consultation with a real doctor over text.

The app works by having you plug in your symptoms and step through a fairly extensive set of questions, many of which depend on the answers you have previously given. Ada then suggests various possible conditions and advises on next steps (treat at home vs. seek further guidance from a professional).

The app aims to empower patients to make more informed decisions about their health. Or, to put it more bluntly, to ensure we only visit a doctor when we need to and, more generally, can be proactive in our healthcare without adding the need for more human doctors.

“Ada has been trained over several years using real world cases, and the platform is powered by a sophisticated artificial intelligence (AI) engine combined with an extensive medical knowledge base covering many thousands of conditions, symptoms and findings,” explains the company. “In every assessment, Ada takes all of a patient’s information into consideration, including past medical history, symptoms, risk factors and more. Through machine learning and multiple closed feedback loops, Ada continues to grow more intelligent, putting Ada ahead of anyone else in the market”.

Ada isn’t claiming to replace your doctor anytime soon. Like a lot of AI being applied to various verticals, not just healthcare, the app is designed to augment the role of humans, not replace it altogether. This can happen in two ways:

  1. Helping to act as a prescreen consultation before, if needed, being handed off to a real doctor for further advice, or simply helping to create a digital paper trail before a consultation takes place.
  2. By getting some of the most obvious symptom-related questions out of the way and captured and analysed by the app, it saves significant time during any follow up consultation.

App feedback has already shown it successfully diagnosing both common and quite rare conditions. And since Ada's AI has been, and continues to be, trained by real doctors, it pools a lot of shared expertise.
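To picture the assessment loop in miniature: keep a table of conditions and their typical symptoms, score each condition by how well it explains the patient's answers, and map the score to a next step. This is a deliberately crude sketch with invented data; Ada's real engine draws on thousands of conditions and closed feedback loops.

    # Toy symptom triage (illustrative only; nothing like Ada's actual engine).
    CONDITIONS = {
        "common cold": {"cough", "runny nose", "sore throat"},
        "flu":         {"fever", "cough", "body aches", "fatigue"},
        "migraine":    {"headache", "nausea", "light sensitivity"},
    }

    def rank(reported):
        scores = {name: len(symptoms & reported) / len(symptoms)
                  for name, symptoms in CONDITIONS.items()}
        return sorted(scores.items(), key=lambda kv: -kv[1])

    def advise(reported):
        condition, score = rank(reported)[0]
        step = "seek further guidance" if score > 0.5 else "treat at home"
        return condition, step

    print(advise({"fever", "cough", "fatigue"}))
    # -> ('flu', 'seek further guidance')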

Video on how it works

WHY IT’S HOT:

Ada is another example of how AI is continually evolving, especially in the healthcare landscape. It's certainly a good thing that this app in particular is not promising to replace doctors, but to crowdsource information that makes doctor's appointments more efficient.

On the other side of this, I am sure doctors are not thrilled about patients coming in with a self-diagnosis, which can undermine the doctor's job and derail an appointment altogether.

Source: TechCrunch

Today, to die. Google Home can recognize multiple voices

Google Home can now be trained to identify the different voices of people you live with. Today Google announced that its smart speaker can support up to six different accounts on the same device. The addition of multi-user support means that Google Home will now tailor its answers for each person and know which account to pull data from based on their voice. No more hearing someone else’s calendar appointments.

So how does it work? When you connect your account on a Google Home, it asks you to say the phrases "Ok Google" and "Hey Google" two times each. Those phrases are then analyzed by a neural network, which can detect certain characteristics of a person's voice. From that point on, any time you say "Ok Google" or "Hey Google" to your Google Home, the neural network will compare the sound of your voice to its previous analysis to work out whether it's you speaking or not. This comparison takes place only on your device, in a matter of milliseconds.
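In speaker-verification terms, that is an enrollment-plus-matching scheme: map the enrollment phrases to a voice vector, then compare each later hotword utterance against it. A hedged numpy sketch follows; embed() is a hypothetical stand-in for Google's neural network, and the threshold and "audio" are invented.

    import numpy as np

    def embed(audio):
        # Hypothetical stand-in for the neural network that turns an
        # utterance into a fixed-length voice vector.
        return np.asarray(audio, dtype=float)

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def enroll(samples):              # e.g. "Ok Google" x2 + "Hey Google" x2
        return np.mean([embed(s) for s in samples], axis=0)

    def is_enrolled_speaker(profile, utterance, threshold=0.95):
        return cosine(profile, embed(utterance)) >= threshold

    profile = enroll([[0.9, 0.1, 0.3], [1.0, 0.2, 0.25]])    # toy "audio"
    print(is_enrolled_speaker(profile, [0.95, 0.15, 0.28]))  # True: same voice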

Why it’s hot?
-Everyone in the family gets a personal assistant.
-Imagine how it might work in a small business or office.
-Once it starts recognizing more than six voices, could every department have its own AI assistant?

Drug Discovery With Deep Learning

Pharmaceutical companies are often at a crossroads when it comes to the development of new drugs. The current FDA process not only takes an enormous amount of resources, but also time. These processes put a large burden of risk on Pharma and could be a deterrent to future innovation.

Why Does It Take So Long?

The typical drug can take upwards of 18 years to hit the market, and then (assuming all is approved) companies have only a relatively short amount of time to sell the drug under patent protection.

A large amount of time is spent just identifying which compound might be a solution for the problem at hand. Thousands of tests occur even before clinical trials are conducted.

How Can We Make This Better?

What if scientists could get a jump start on those thousands of possible compounds? That's just what Atomwise has done. Partnering with IBM, it has leveraged a powerful AI data platform to create, model and test compounds at a molecular level. This allows the team to rapidly test compounds and compute scores for the likelihood of effect. In addition, Atomwise can use current compounds to find treatments for other diseases, like Ebola.
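The screening idea reduces to scoring many candidates against a target and keeping the best. Bit-vector fingerprints and Tanimoto similarity are genuine cheminformatics staples, but the molecules and scores below are invented, and Atomwise's actual deep-learning scorer is far more sophisticated.

    # Toy virtual screening: rank candidate compounds by Tanimoto similarity
    # of their structural fingerprints to a known active compound.
    def tanimoto(a, b):
        return len(a & b) / len(a | b)

    known_active = {1, 4, 7, 9, 12}          # invented fingerprint bits
    candidates = {
        "compound_A": {1, 4, 7, 9, 13},
        "compound_B": {2, 3, 5, 8},
        "compound_C": {1, 4, 9, 12, 15},
    }

    for name, bits in sorted(candidates.items(),
                             key=lambda kv: tanimoto(known_active, kv[1]),
                             reverse=True):
        print(name, round(tanimoto(known_active, bits), 2))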

Why It's Hot

Using the latest in powerful processing will allow more advancement than ever before. As computing power continues to increase, so does the opportunity to solve complex problems efficiently. This not only leads to more opportunities for development, but also to reduced time to market.


WeChat Wallet On the Rise in China

In the U.S., you may pay for your coffee from the Starbucks app, book a car on Uber and place orders with the Amazon shopping app. But in China, you can do all of these through WeChat Wallet alone.

WeChat Wallet functions like Apple Pay, letting users purchase products and services with select credit or debit cards (mostly from Asian banks; Chase and Bank of America are not available, for instance). Unlike Apple Pay, WeChat Wallet has integrated services owned by its parent company, Tencent, letting its 800 million-plus monthly users do things like pay for utilities and manage personal finances within WeChat. It has also partnered with a limited number of third-party companies to make them more discoverable on the platform.

WeChat Wallet interface: customize your Starbucks coffee gift.

Why It’s Hot

WeChat is an incredibly popular 1:1 networking platform that successful global brands have recognized as key for reaching the Chinese market. This, coupled with tech like WeChat Wallet and the growing functionality of chatbots thanks to sophisticated AI, lays the groundwork for countless opportunities for brands to monetize their 1:1 efforts in key markets.

Starbucks, a longtime early adopter of new tech, is already getting in on the WeChat Wallet opportunity in this market.

For some of our global brands where China is a key market, like ETS, this is definitely a trend to watch.

Cathay Pacific Airways’ Artmap Project

Cathay Pacific Airways is emailing personalized paintings as birthday gifts to its loyalty club members. Members can share their painting digitally or print a high-resolution copy.

The art piece is made by an algorithmic tool specially designed to create tailored digital paintings using each member’s travel data and flight trajectories.
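As a rough sketch of how travel data can drive artwork (coordinates, styling and output invented for illustration; the airline's actual tool is proprietary):

    import matplotlib.pyplot as plt

    # Toy "travel painting": each flight becomes one stroke on the canvas.
    flights = [((114.2, 22.3), (139.7, 35.7)),    # Hong Kong -> Tokyo
               ((114.2, 22.3), (151.2, -33.9)),   # Hong Kong -> Sydney
               ((114.2, 22.3), (-0.1, 51.5))]     # Hong Kong -> London

    for (x1, y1), (x2, y2) in flights:
        plt.plot([x1, x2], [y1, y2], linewidth=3, alpha=0.6)
    plt.axis("off")
    plt.savefig("birthday_painting.png", dpi=150)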

Why It's Hot

The brief was for a member's birthday greeting that would drive increased loyalty amongst Marco Polo loyalty club members. But the brand understands that consumers are not loyal to programs or points: they are loyal to experiences.

Cathay Pacific is genuinely about meaningful experiences, treating travel with respect and understated elegance, and being there when people need it and not when they don't. This experience is rewarding, inspiring, and personal.


Fake News Challenge: Using AI To Crush Fake News

The Fake News Challenge is a grassroots competition of over 100 volunteers and 71 teams from academia and industry, to find solutions to the problem of fake news.

The competition is designed to foster the development of new tools to help human fact-checkers separate real news from fake news using machine learning, natural language processing, and AI.
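A common starting point for such tools is checking whether a headline is even related to the article body it sits on. A bare-bones sketch of that one baseline feature (text and threshold invented; real entries use far richer features and models):

    # Bare-bones headline/body relatedness check, a classic baseline feature
    # in fake-news and stance-detection pipelines.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    headline = "City council approves new park budget"
    body = ("The council voted 7-2 on Tuesday to fund the downtown park "
            "expansion over the next two years.")

    vecs = TfidfVectorizer().fit_transform([headline, body])
    relatedness = cosine_similarity(vecs[0], vecs[1])[0, 0]
    print("related" if relatedness > 0.1 else "unrelated", round(relatedness, 2))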


http://www.fakenewschallenge.org/

Why it's hot:

  • When everyone can create content anywhere, it's important that truth be validated and misinformation identified.
  • This is an immensely important and complex task, executed as a global hackathon spread over 6 months. Big challenges can be approached in new ways.
  • This challenge will result in new tools that could make their way into our publishing platforms, our social networks, etc. Is this potentially good or bad for us?


Explainable AI

We teach machines to think like humans. We ask them to solve complex tasks that increase in difficulty, and the machine iterates and learns to handle new obstacles thrown its way. But how can humans understand how machines arrive at specific solutions? What can we learn from the machines and their reasoning process?

Explainable AI, an emerging field in AI research, will help answer those questions. Through explainable AI, we will be able to understand a machine's rationale, characterize its strengths and weaknesses in the decision-making process, and have a better understanding of how it will behave in the future.

“Consider the use of AI-powered machines to help Wall Street firms trade stocks and other financial instruments. What if automated trading systems start building a massive position in a stock, against everything that the market appears to be predicting? If you were the head of the equity trading team, you’d expect those machines to be able to explain how they came to that decision. Maybe they discovered a market inefficiency that nobody has noticed yet, or maybe they are getting better at anticipating the moves of other rival Wall Street firms. But when millions of dollars are potentially at stake, you want to make sure that a bunch of machines are trading your money wisely.”
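One widely used, model-agnostic XAI technique is permutation importance: shuffle one input feature at a time and measure how much the model's score drops. A compact scikit-learn sketch on a stock dataset (a generic illustration, unrelated to any trading system):

    # Permutation importance: a simple, model-agnostic way to ask a model
    # which inputs its decisions actually depend on.
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    data = load_iris()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    result = permutation_importance(model, data.data, data.target,
                                    n_repeats=10, random_state=0)
    for name, drop in zip(data.feature_names, result.importances_mean):
        print(f"{name}: {drop:.3f}")   # bigger drop = feature mattered more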

Why it’s hot:

  • Understanding how a machine “thinks” helps research teams check and debug machines over time so that they can anticipate how it will act in the future
  • XAI (explainable AI) brings us one step closer to making machines accountable for their actions, just like humans are (self-driving cars)
  • XAI could potentially help humans identify inefficiencies or understand complexities that were once a mystery

Google’s AI Bot Wins Again!

Proving that it’s no fluke, Google’s artificial intelligence program AlphaGo won its second game of Go yesterday by beating the #2 human Go champion in the world. Go is described as the world’s most complicated game and it was thought that humans would still prevail when matched against AlphaGo.

As reported on cnet.com, world champion Lee Sedol said, "Yesterday I was surprised (at losing) but today it's more than that, I am quite speechless." Two wins in a row was virtually unthinkable. The match is being held in South Korea as part of the Google DeepMind Challenge Match. Millions of people around the world are watching via a live stream of the five-game competition.

“To put it in context, it’s a game for people who think chess is too easy. The victory has also come as a surprise to everyone, as it wasn’t thought that artificial intelligence, the science of computers that more closely mimic human smarts, was ready to take on humans at Go just yet. It’s a sign that AlphaGo is smarter than we thought.”

Why It’s Hot

AI was not thought to have advanced to the level of winning Go, the world's most complex game. But it has now done so twice. Time to worry about AI vs. the human brain? According to Mark Zuckerberg, we have nothing to fear. As mentioned in the cnet.com article, he pointed out that we're "nowhere near understanding how intelligence actually works," never mind replicating and beating it.

Soothing robot in the doctor’s office

Going to the doctor can be a scary trip for children, but a robot named MEDi can make the visit a little easier and less frightening. Short for Medicine and Engineering Designing Intelligence, MEDi stays with the child through medical procedures, talking to them in one of 20 languages and offering soothing advice to get them through the visit.

Equipped with multiple cameras, facial recognition technology and the ability to speak directly to its little patients, MEDi is the product of Tanya Beran, a professor of community health sciences at the University of Calgary in Alberta. Her team began developing MEDi three years ago and conducted a study of 57 children. According to Yahoo Tech, "Each was randomly assigned a vaccination session with a nurse, who used the same standard procedures to dispense the medication. In some of those sessions, MEDi used cognitive-behavioral strategies to assuage the children as they got the shot. Afterward, children, parents, and nurses filled out surveys to estimate the pain and distress of the whole shebang."

The result was that the kids who had MEDi by their side during the procedure reported less pain. Since that study, MEDi is being programmed for more serious procedures, from chemotherapy to blood transfusions to surgery.

Why it’s hot

Robotic technology is starting to come together with practical applications for people. With motion, voice, and the ability to recognize humans and interact with logical language patterns, MEDi is a natural step along the way to fully interactive robots, possibly even artificial intelligence.