The AI Drone Crocodile Hunter

Last summer, Australia began testing drones at its beaches to help spot distressed swimmers – overhead lifeguards, in effect. Now Ripper Group, the company behind that technology, is developing an algorithm that lets its drones spot crocodiles.

While still infrequent, crocodile attacks have risen in recent years. And crocodiles are not easily spotted, since they can stay submerged in murky water for up to 45 minutes. So Ripper Group is using machine learning – with a large database of images – to train its drones to distinguish crocodiles from 16 other marine animals, boats, and humans.
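
The article doesn’t detail the model, but this kind of multi-class image recognition is commonly built by fine-tuning a pretrained network on labeled photos. A minimal sketch with PyTorch/torchvision – the folder layout and class names are assumptions, not Ripper Group’s actual setup:

    import torch
    from torch import nn
    from torchvision import datasets, models, transforms

    # Hypothetical layout: images/train/<class_name>/*.jpg, with classes like
    # "crocodile", "dolphin", "shark", ..., "boat", "human" (18 classes total).
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_data = datasets.ImageFolder("images/train", transform=transform)
    loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

    # Start from a pretrained backbone and retrain only the final layer.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for images, labels in loader:  # one pass; real training runs many epochs
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()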

The drones also carry warning sirens and flotation devices for up to four people, to assist in emergency rescues when danger is spotted.

Why It’s Hot

Lifeguards are limited in what they can see and how quickly they can act. With the assistance of drones, beachgoers can stay carefree.

Source

Retail wants a Minority Report for returns

In what now seems inevitable, an online fashion retailer in India – owned by an e-commerce startup backed by Walmart – is researching deep neural networks that predict which items a buyer will return before they even buy them.

With this knowledge, they’ll be better able to forecast their returns costs. More interestingly, they’ll be able to incentivize shoppers NOT to return as much, using both loss and gain offers tied to the items in one’s cart.

The nuts and bolts of it: the AI assigns you a score based on what it determines your risk of returning a specific item to be. That determination could draw on your returns history, as well as less obvious data points, such as your search and shopping patterns elsewhere online, your credit score, and predictions about your size and fit based on aggregated data about other people.

Then it will treat you differently based on that assessment. If you’re put in a high-risk category, you may pay more for shipping, or you may be offered a discount to accept a no-returns policy tailored just for you. It’s like car insurance for drivers under 25, but on hyperdrive. If you fit a certain demo, you may start paying more for everything.
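
The retailer hasn’t published its model, but a return-risk score like this is naturally framed as supervised classification over shopper and item features. A toy scikit-learn sketch – every feature, number, and policy threshold below is invented for illustration:

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    # Hypothetical features per (shopper, item) pair:
    # [past return rate, sessions browsing similar items, size-mismatch score]
    X_train = np.array([
        [0.10, 2, 0.1],
        [0.65, 9, 0.8],
        [0.05, 1, 0.2],
        [0.80, 7, 0.9],
    ])
    y_train = np.array([0, 1, 0, 1])  # 1 = the item was returned

    model = GradientBoostingClassifier().fit(X_train, y_train)

    # Score a new cart item and pick a policy tier from the risk estimate.
    risk = model.predict_proba([[0.55, 6, 0.7]])[0, 1]
    if risk > 0.7:
        offer = "discount in exchange for a no-returns policy"
    elif risk > 0.4:
        offer = "standard shipping fee"
    else:
        offer = "free shipping and free returns"
    print(f"return risk: {risk:.2f} -> {offer}")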

Preliminary tests have shown promise in reducing return rates.

So many questions:

Is this a good idea from a brand perspective? If this becomes a trend, will retailers with cheap capital – ones that can afford a high volume of returns – smear this practice as a way to gain market share?

Will this drive more people to better protect their data and “hide” themselves online? We might be OK with being fed targeted ads based on our data, but what happens when your data footprint and demo make that jacket you wanted cost more?

Will this encourage more people to shop at brick and mortar stores to sidestep retail’s big brother? Or will brick and mortar stores find a way to follow suit?

How much might this information flow back up the supply chain, to product design, even?

Why it’s hot

Returns are expensive for retailers. They’re also bad for the environment: many returned items are simply sent to the landfill, not to mention the carbon emissions from shipping them back.

So, many retailers are scrambling to find the balance between reducing friction in the buying process by offering easy returns, on the one hand, and reducing the amount of actual returns, on the other.

There’s been talk of Amazon using predictive models to ship you stuff without you ever “buying” it. You return what you don’t want, and it eventually learns your preferences to the point where you just receive a box of stuff at intervals, and money is extracted from your bank account. This might also reduce fossil fuel use.

How precise can these predictive models get? And how might people be able to thwart them? Is there a non-dystopian way to reduce returns?

Source: ZDNet

Going Paperless in a Brick and Mortar

Lush is known for its colorful soaps and bath bombs, but the brand has consistently prioritized going green above all else—and its very first SXSW activation was no exception.

The brand set up its bath bomb pop-up to showcase its 54 new bath bomb creations using absolutely no signage. Instead, attendees could download the Lush Labs app, which uses AI and machine learning to determine what each bath bomb is with just a quick snapshot. “At Lush, we care about sustainability, and we wanted to take that same lens … and apply it to the way we are using technology,” Charlotte Nisbet, global concept lead at Lush, told Adweek.

Nisbet explained that three decades ago, Lush co-founder Mo Constantine invented the bath bomb when brainstorming a packaging-free alternative to bubble bath. (The new bath bombs are being released globally on March 29 in celebration of 30 years since Constantine created the first bath bomb in her garden shed in England.)

“But we were still facing the barrier to being even more environmentally friendly with packaging and signage in our shops,” Nisbet said.

Enter the Lush Lens feature on the Lush Labs app, which lets consumers scan a product with their phone to see all the key information they’d need before making a purchase: price, ingredients and even videos of what the bath bomb looks like when submerged in water. “This means that not only can we avoid printing signage that will eventually need to be replaced, but also that customers can get information on their products anytime while at home,” Nisbet said.
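
Lush hasn’t said exactly what powers Lens, but conceptually it’s an image classifier sitting in front of a product-catalog lookup. A toy sketch where the classifier, product id, and catalog entry are all hypothetical stand-ins:

    # Toy sketch: recognize a product photo, then look up its details.
    # `classify_image` stands in for whatever model the app actually uses;
    # the catalog entry below (including the price) is invented.
    CATALOG = {
        "demo-bath-bomb": {
            "price": "$7.95",
            "ingredients": ["baking soda", "citric acid", "mica"],
            "demo_video": "https://example.com/demo-bath-bomb.mp4",
        },
    }

    def classify_image(photo_bytes: bytes) -> str:
        """Placeholder for an image-recognition model returning a product id."""
        return "demo-bath-bomb"

    def lush_lens(photo_bytes: bytes) -> dict:
        product_id = classify_image(photo_bytes)
        return CATALOG[product_id]

    print(lush_lens(b"...raw camera frame..."))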

Why It’s Hot

The application sounds cool, but is this a sustainable direction for more stores to take? As brick-and-mortar stores continue to struggle, we could see many start experimenting with ways to bring digital experiences into retail spaces for consumers already plugged into their smartphones.

Source: Adweek

Google Flights will now predict airline delays – before the airlines do

Google is rolling out a few new features to its Google Flights search engine to help travelers tackle some of the more frustrating aspects of air travel – delays and the complexities of the cheaper, Basic Economy fares. Google Flights will take advantage of its understanding of historical data and its machine learning algorithms to predict delays that haven’t yet been flagged by airlines themselves.

As Google explains, the combination of data and A.I. technologies means it can predict some delays in advance of any sort of official confirmation. Google says it won’t actually flag these in the app until it’s at least 80 percent confident in the prediction, though.
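
The 80 percent rule is easy to picture: compute a delay probability and only surface it when it clears the threshold. A toy sketch – the model and features here are stand-ins, not Google’s system:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical historical features: [departure hour, storm nearby (0/1),
    # inbound aircraft late (0/1)]; label 1 = the flight departed late.
    X = np.array([[7, 0, 0], [18, 1, 1], [9, 0, 1],
                  [20, 1, 0], [6, 0, 0], [17, 1, 1]])
    y = np.array([0, 1, 1, 1, 0, 1])

    model = LogisticRegression().fit(X, y)

    def maybe_flag_delay(features, threshold=0.80):
        p_delay = model.predict_proba([features])[0, 1]
        # Only show the prediction when the model is at least 80% confident.
        return f"Likely delayed ({p_delay:.0%})" if p_delay >= threshold else None

    print(maybe_flag_delay([19, 1, 1]))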

It will also provide reasons for the delays, like weather or an aircraft arriving late.

You can track the status of your flight by searching for your flight number or the airline and flight route, notes Google. The delay information will then appear in the search results.

The other new feature aims to help travelers make sense of what Basic Economy fares include and exclude in their ticket price. Google Flights will now display the restrictions associated with these fares – like limits on using overhead bin space or selecting a seat, as well as the fare’s additional baggage fees. It’s initially doing so for American, Delta and United flights worldwide.

Source: TechCrunch

Why It’s Hot

A great example of using AI and predictive methods to drive a better customer experience, and to push back on an industry that is usually less than transparent. It makes Google’s search solutions more desirable and solidifies Google as THE place to search for everything. It would be nice to see the alerts become actionable, though, as right now they are more anxiety-creators than anything.


Machine learning as film critic

While identifying a Wes Anderson movie is probably something many moviegoers could do without complex AI, the creator of a new machine learning program called Machine Visions is hoping he can learn more about what makes an auteur’s works distinct.

[Yannick] Assogba uses four of Anderson’s films as the source for his project – The Life Aquatic, The Royal Tenenbaums, Fantastic Mr. Fox, and Moonrise Kingdom – from which he extracts a frame every 10 seconds, for a sample of 2,309 frames in total.

Assogba investigates color and recurring motifs in Anderson’s works, drawing out themes far faster than a human could by watching and processing the images.

[Image: The Life Aquatic pixel grid – each frame the program analyzed from The Life Aquatic is displayed as a single pixel.]
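
Machine Visions’ own code isn’t reproduced here, but the pixel grid is straightforward to approximate: sample a frame every 10 seconds, average its color, and paint one pixel per sampled frame. A rough sketch assuming OpenCV and Pillow are installed (the file name is a placeholder):

    import cv2  # pip install opencv-python
    from PIL import Image

    def film_pixel_grid(video_path, step_seconds=10, grid_width=50):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        colors, frame_idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_idx % int(fps * step_seconds) == 0:
                b, g, r = frame.mean(axis=(0, 1))  # average color of the frame
                colors.append((int(r), int(g), int(b)))
            frame_idx += 1
        cap.release()
        # Paint one pixel per sampled frame, wrapping into rows.
        rows = -(-len(colors) // grid_width)  # ceiling division
        grid = Image.new("RGB", (grid_width, rows))
        grid.putdata(colors + [(0, 0, 0)] * (grid_width * rows - len(colors)))
        return grid

    film_pixel_grid("the_life_aquatic.mp4").save("pixel_grid.png")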

Why It’s Hot

Machine Visions not only provides an interesting way to look at film and cinematography through the lens of technology, it also offers a detailed and accessible framework for starting to understand machine learning. By introducing people to machine learning through art and pop culture, Assogba gives both technical and non-technical people a reason to explore further.

“It can suggest similarities and juxtapositions for a human to look at, some are ones we would find ourselves while others might be surprising or poetic because of imperfections in the algorithms and models.”

Learn more: i-D | Mashable | Machine Visions

How does your garden grow?

Deere & Company has signed an agreement to acquire Blue River Technology, a leader in applying machine learning to agriculture.

Blue River has designed and integrated computer vision and machine learning technology that will enable growers to reduce the use of herbicides by spraying only where weeds are present, optimizing the use of inputs in farming – a key objective of precision agriculture.

“Blue River is advancing precision agriculture by moving farm management decisions from the field level to the plant level,” said Jorge Heraud, co-founder and CEO of Blue River Technology. “We are using computer vision, robotics, and machine learning to help smart machines detect, identify, and make management decisions about every single plant in the field.”
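
The release doesn’t include implementation details, but the see-and-spray idea boils down to a per-plant classify-then-act loop. A schematic Python sketch, with stand-ins for the vision stack and sprayer hardware:

    from dataclasses import dataclass

    @dataclass
    class Plant:
        x: float
        y: float
        label: str  # output of a per-plant image classifier: "crop" or "weed"

    class Sprayer:
        def spray(self, x: float, y: float) -> None:
            print(f"spraying herbicide at ({x:.1f}, {y:.1f})")

    def process_frame(plants: list[Plant], sprayer: Sprayer) -> None:
        # Spray only where weeds are detected; crops are left untouched,
        # which is what cuts total herbicide use.
        for plant in plants:
            if plant.label == "weed":
                sprayer.spray(plant.x, plant.y)

    # Toy frame: in the real system the labels come from computer vision.
    process_frame([Plant(0.4, 1.2, "crop"), Plant(0.9, 1.5, "weed")], Sprayer())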

More on PR Newswire.

Why It’s Hot

Industrial agriculture is having a profound effect on our planet, from cheaper food to global warming. Perhaps AI can help us better control our impact (fingers crossed).

Your Instagram Posts May Hold Clues to Your Mental Health

The photos you share online speak volumes. They can serve as a form of self-expression or a record of travel. They can reflect your style and your quirks. But they might convey even more than you realize: The photos you share may hold clues to your mental health, new research suggests.

From the colors and faces in their photos to the enhancements they make before posting them, Instagram users with a history of depression seem to present the world differently from their peers, according to the study, published this week in the journal EPJ Data Science.

“People in our sample who were depressed tended to post photos that, on a pixel-by-pixel basis, were bluer, darker and grayer on average than healthy people,” said Andrew Reece, a postdoctoral researcher at Harvard University and co-author of the study with Christopher Danforth, a professor at the University of Vermont.

The pair identified participants as “depressed” or “healthy” based on whether they reported having received a clinical diagnosis of depression in the past. They then used machine-learning tools to find patterns in the photos and to create a model predicting depression by the posts.

They found that depressed participants used fewer Instagram filters – the tools that let users digitally alter a photo’s brightness and coloring before it is posted. When these users did add a filter, they tended to choose “Inkwell,” which drains a photo of its color, making it black-and-white. The healthier users tended to prefer “Valencia,” which lightens a photo’s tint.

Depressed participants were more likely to post photos containing a face. But when healthier participants did post photos with faces, theirs tended to feature more faces per photo, on average.

The researchers used software to analyze each photo’s hue, color saturation and brightness, as well as the number of faces it contained. They also collected information about the number of posts per user and the number of comments and likes on each post.
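
As a rough illustration of that pipeline – per-photo hue, saturation and brightness plus a face count, fed into a simple classifier – here is a toy sketch with Pillow and scikit-learn. The numbers are invented, not the study’s data, and the face count is assumed to come from separate face-detection software:

    import numpy as np
    from PIL import Image
    from sklearn.linear_model import LogisticRegression

    def photo_features(path: str, face_count: int) -> list[float]:
        """Average hue/saturation/brightness of a photo, plus its face count.
        (Not called in this toy demo, which uses hard-coded rows below.)"""
        hsv = np.array(Image.open(path).convert("HSV"), dtype=float)
        hue, saturation, value = hsv.mean(axis=(0, 1))
        return [hue, saturation, value, face_count]

    # Toy training set: rows of [hue, saturation, brightness, faces];
    # label 1 = user reported a past depression diagnosis.
    X = np.array([[150, 40, 80, 1], [160, 35, 70, 1],
                  [90, 120, 180, 0], [85, 130, 190, 0]])
    y = np.array([1, 1, 0, 0])

    model = LogisticRegression().fit(X, y)
    print(model.predict_proba([[155, 45, 75, 1]])[0, 1])  # depression-like score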

Though they warned that their findings may not apply to all Instagram users, Mr. Reece and Mr. Danforth argued that the results suggest that a similar machine-learning model could someday prove useful in conducting or augmenting mental health screenings.

“We reveal a great deal about our behavior with our activities,” Mr. Danforth said, “and we’re a lot more predictable than we’d like to think.”

Source: New York Times

Why It’s Hot

The link between photos and health is an interesting one to explore. The role of new/alternate technologies (or just creative ways of using existing ones) in identifying illness — whether mental or otherwise — is something we are sure to see more of.

Start brushing off your resume…

The Mirai is Toyota’s car of the future. It runs on hydrogen fuel cells, gets 312 miles on a full tank, and emits only water vapor. So, to target tech and science enthusiasts, the brand is running thousands of ads with messaging crafted around their interests.

The catch? The campaign was written by IBM’s supercomputer, Watson. After spending two to three months training the AI to piece together coherent sentences and phrases, Saatchi LA began rolling out a campaign last week on Facebook called “Thousands of Ways to Say Yes” that pitches the car through short video clips.

Saatchi LA wrote 50 scripts based on location, behavioral insights and occupation data that explained the car’s features to set up a structure for the campaign. The scripts were then used to train Watson so it could whip up thousands of pieces of copy that sounded like they were written by humans.

http://www.adweek.com/digital/saatchi-la-trained-ibm-watson-to-write-thousands-of-ads-for-toyota/

Why It’s Hot
May let us focus more on the design and less on the production.

Computers that recognize hate speech

“Based on text posted on forums and social media, a new machine learning method has been developed to detect antisocial behaviours such as hate speech or indications of violence with high accuracy.”

Link to article

What this can be used for:

  • Identifying clusters and patterns of hateful speech on social media platforms
  • Preventing hate crimes (“In extreme cases, perpetrators of school shootings or other acts of terror post angry or boastful messages to niche forums before they act.”)
  • Big brother
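
The article doesn’t spell out the method, but text classifiers of this kind are often built as a bag-of-words pipeline over labeled posts. A minimal scikit-learn sketch on a tiny toy corpus:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy corpus; a real system trains on large labeled forum/social datasets.
    posts = ["you people should all disappear",
             "had a great time at the game tonight",
             "they deserve to be hurt",
             "looking forward to the weekend"]
    labels = [1, 0, 1, 0]  # 1 = antisocial / hateful, 0 = benign

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(posts, labels)

    # Probability that a new post is antisocial.
    print(clf.predict_proba(["those people deserve it"])[0, 1])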

On a related note, remember this:
Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day

Google’s Newest Machine Learning A.I. experiment: AutoDraw

“Drawing on your phone or computer can be slow and difficult—so we created AutoDraw, a new web-based tool that pairs machine learning with drawings created by talented artists to help you draw.”

Using machine learning, Google’s new A.I. experiment can predict what you are trying to draw. It’s like auto-complete for drawing icons.


Give it a try here: https://autodraw.com

Another A.I. Google experiment that uses the same tech but in more of a gamified way: https://quickdraw.withgoogle.com/

Why it’s hot: Our designers will soon be replaced by robots.

TechDay NYC:

ALSO! TechDay, an expo of 600-ish disruptive NYC startups, is on Tuesday at Pier 84 from 10am to 5pm. It’s free, but you have to register on the site first! A few of us will be going around 3pm if you want to join then, or go on your own earlier in the day. It’ll be a good source for some local hotsauce 😉


https://techdayhq.com/new-york

Your Kid’s Computer Has Dinner Covered.

Neural networks are computer learning algorithms that mimic the interconnected neurons of a living brain, managing astonishing feats of image classification, speech recognition, or music generation by forming connections between simulated neurons.

I’m not a neural network expert, so I had to look that one up when I heard that a grad student had loaded neural network code onto her 2010 MacBook Pro and started training it on a bunch of recipes and cocktails.

Here are a few recipes the network has generated:

Pears Or To Garnestmeam

meats

¼ lb bones or fresh bread; optional
½ cup flour
1 teaspoon vinegar
¼ teaspoon lime juice
2  eggs

Brown salmon in oil. Add creamed meat and another deep mixture.

Discard filets. Discard head and turn into a nonstick spice. Pour 4 eggs onto clean a thin fat to sink halves.

Brush each with roast and refrigerate.  Lay tart in deep baking dish in chipec sweet body; cut oof with crosswise and onions.  Remove peas and place in a 4-dgg serving. Cover lightly with plastic wrap.  Chill in refrigerator until casseroles are tender and ridges done.  Serve immediately in sugar may be added 2 handles overginger or with boiling water until very cracker pudding is hot.

Yield: 4 servings

This is from a network that’s been trained for a relatively long time – starting from complete unawareness of whether it’s looking at prose or code, English or Spanish, etc., it’s already got a lot of the vocabulary and structure worked out.

This is particularly impressive given that it has the memory of a goldfish – it can only analyze 65 characters at a time, so by the time it begins the instructions, the recipe title has already passed out of its memory, and it has to guess what it’s making. It knows, though, to start by browning meat, to cover with plastic wrap before chilling in the refrigerator, and to finish by serving the dish.

Compare that to a recipe generated by a much earlier version of the network:

Immediately Cares, Heavy Mim

upe, chips

3  dill loasted substetcant
1  cubed chopped  whipped cream
3  unpreased, stock; prepared; in season
1  oil
3 cup milk
1 ½ cup mOyzanel chopped
½ teaspoon lemon juice
1 ¼ teaspoon chili powder
2 tablespoon dijon stem – minced
30  dates afrester beater remaining

Bake until juice. Brush from the potato sauce: Lightly butter into the viscin. Cook combine water. Source: 0 25 seconds; transfer a madiun in orenge cinnamon with electres if the based, make drained off tala whili; or chicken to well. Sprinkle over skin greased with a boiling bowl.  Toast the bread spritkries.

Yield: 6 servings

which bakes first, has the source in the middle of the recipe directions, mixes sweet and savory, and doesn’t yet know that you can’t cube or chop whipped cream.

An even earlier version of the network hasn’t yet figured out how long an ingredients list should be; it just generates ingredients for pages and pages:

Tued Bick Car

apies

2 1/5 cup tomato whene intte
1 cup with (17 g cas pans or
½ cup simmer powder in patsorwe ½ tablespoon chansed in
1 ½ cup nunabes baste flour fite (115 leclic
2 tablespown bread to
¼ cup 12″. oz mice
1  egg barte, chopped shrild end
2 cup olasto hote
¼ cup fite saucepon; peppen; cut defold
12 cup mestsentoly speeded boilly,, ( Hone
1  Live breseed
1  22 ozcugarlic
1 cup from woth a soup
4 teaspoon vinegar
2 9/2 tablespoon pepper garlic
2 tablespoon deatt

And here’s where it started out after only a few tens of iterations:

ooi eb d1ec Nahelrs  egv eael
ns   hi  es itmyer
aceneyom aelse aatrol a
ho i nr  do base
e2
o cm raipre l1o/r Sp degeedB
twis  e ee s vh nean  ios  iwr vp  e
sase
pt e
i2h8
ePst   e na drea d epaesop
ee4seea .n anlp
o s1c1p  ,  e   tlsd
4upeehe
lwcc   eeta  p ri  bgl as eumilrt

Even this shows some progress compared to the random ASCII characters it started with – it’s already figured out that lower case letters predominate, and that there are lots of line breaks. Pretty impressive!
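
For the curious, the network described is a character-level recurrent model: it reads a sliding 65-character window and learns to predict the next character. A compact sketch of that idea in Keras – the corpus file, layer sizes and training settings are placeholders, not the grad student’s actual setup:

    import numpy as np
    from tensorflow import keras

    # Assumed corpus file: a plain-text dump of recipes, one after another.
    corpus = open("recipes.txt").read()
    chars = sorted(set(corpus))
    char_to_ix = {c: i for i, c in enumerate(chars)}

    WINDOW = 65  # the network only ever "sees" 65 characters at a time

    # Slice the corpus into (65-char window -> next character) pairs.
    X = np.zeros((len(corpus) - WINDOW, WINDOW, len(chars)), dtype=np.float32)
    y = np.zeros(len(corpus) - WINDOW, dtype=np.int64)
    for i in range(len(corpus) - WINDOW):
        for t, ch in enumerate(corpus[i:i + WINDOW]):
            X[i, t, char_to_ix[ch]] = 1.0
        y[i] = char_to_ix[corpus[i + WINDOW]]

    model = keras.Sequential([
        keras.layers.LSTM(128, input_shape=(WINDOW, len(chars))),
        keras.layers.Dense(len(chars), activation="softmax"),
    ])
    model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
    model.fit(X, y, batch_size=128, epochs=1)  # real runs train far longer

    # Generate text one character at a time, feeding predictions back in.
    text = corpus[:WINDOW]
    for _ in range(200):
        x = np.zeros((1, WINDOW, len(chars)), dtype=np.float32)
        for t, ch in enumerate(text[-WINDOW:]):
            x[0, t, char_to_ix[ch]] = 1.0
        probs = model.predict(x, verbose=0)[0].astype("float64")
        probs /= probs.sum()
        text += chars[np.random.choice(len(chars), p=probs)]
    print(text)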

Why It’s Hot:
Progress, progress, progress. Sometimes we take for granted how long and arduous the road to furthering our convenience is, or how well technology actually gets us from point A to point B. We don’t always need to look under that hood, but we should be happy someone does, and technology such as machine-learning neural networks continues to evolve to make our lives easier – or at least more entertaining until it gets something right. As the ability to learn from the tons of content mankind has already created continues to improve, there really are some scary (don’t)DIY frontiers on the horizon. Forget wondering whether your kid lifted their essay content from an online wiki; worry instead whether he loaded some code, taught his Mac to ingest thousands of volumes of American history, and spit out a dissertation on the significance of the Lincoln-Douglas debates without penning a word. Then don’t punish that kid – get him a job making me new cocktails.
Click here if you want to see the cocktails it created:

Google Training Ad Placement Computers to Be Offended

After seeing ads from Coca-Cola, Procter & Gamble and Wal-Mart appear next to racist, anti-Semitic or terrorist videos, Google’s engineers realized their computer models had a blind spot: they did not understand context.

Now teaching computers to understand what humans can readily grasp may be the key to calming fears among big-spending advertisers that their ads have been appearing alongside videos from extremist groups and other offensive messages.

Google engineers, product managers and policy wonks are trying to train computers to grasp the nuances of what makes certain videos objectionable. Advertisers may tolerate use of a racial epithet in a hip-hop video, for example, but may be horrified to see it used in a video from a racist skinhead group.

That ads bought by well-known companies can occasionally appear next to offensive videos has long been considered a nuisance to YouTube’s business. But the issue has gained urgency in recent weeks, as The Times of London and other outlets have written about brands that inadvertently fund extremists through automated advertising — a byproduct of a system in which YouTube shares a portion of ad sales with the creators of the content those ads appear against.

This glitch in the company’s giant, automated process turned into a public-relations nightmare. Companies like AT&T and Johnson & Johnson said they would pull their ads from YouTube, as well as Google’s display advertising business, until they could get assurances that such placement would not happen again.

“We take this as seriously as we’ve ever taken a problem,” Philipp Schindler, Google’s chief business officer, said in an interview last week. “We’ve been in emergency mode.”

Over the last two weeks, Google has changed what types of videos can carry advertising, barring ads from appearing with hate speech or discriminatory content.

It is also putting in more stringent safety standards by default, so an advertiser must choose to place ads next to more provocative content. Google created an expedited way to alert it when ads appear next to offensive content.

Google’s efforts are being noticed. Johnson & Johnson, for example, said it had resumed YouTube advertising in a number of countries. Google said other companies were starting to return.

To train the computers, Google is applying machine-learning techniques — the underlying technology for many of its biggest breakthroughs, like the self-driving car. It has also brought in large human teams (it declined to say how big) to review the appropriateness of videos that computers flagged as questionable.

Essentially, they are training computers to recognize that footage of a woman in a sports bra and leggings doing yoga poses is an exercise video safe for advertising, not sexually suggestive content. Similarly, they will mark video of a Hollywood action star waving a gun as acceptable to some advertisers, while flagging a similar image involving an Islamic State gunman as inappropriate.

Armed with human-verified examples of what is safe and what is not, Google’s computer systems break down the images of a YouTube video frame by frame, analyzing every image. They also digest what is being said, the video’s description from the creator and other signals to detect patterns and identify subtle cues for what makes a video inappropriate.

The idea is for machines to eventually make the tough calls. In the instances when brands feel that Google failed to flag an inappropriate video, that example is fed back into the system so it improves over time. Google said it had already flagged five times as many videos as inappropriate for advertising, although it declined to provide absolute numbers on how many videos that entailed.
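
Google’s models are obviously not public, but the frame-by-frame, multi-signal approach can be pictured as scoring visual and text signals separately and then combining them. A schematic sketch in which every scoring function and weight is a stand-in:

    # Schematic ad-safety check: each scoring function below is a stand-in
    # for Google's actual (non-public) models.
    def frame_scores(frames):
        """Per-frame probability that the imagery is unsafe for ads."""
        return [0.05, 0.10, 0.92]  # placeholder values

    def text_score(title, description, transcript):
        """Probability that the spoken/written content is unsafe."""
        return 0.15  # placeholder value

    def video_is_ad_safe(frames, title, description, transcript):
        # Weighted blend of the worst frame and the text signals.
        unsafe = (max(frame_scores(frames)) * 0.7
                  + text_score(title, description, transcript) * 0.3)
        return unsafe < 0.5

    # Advertiser-flagged mistakes would be fed back as new training
    # examples, so the models improve over time.
    print(video_is_ad_safe([], "title", "description", "transcript"))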

Source: NYT

Decoding Human Emotions With Machine Learning

Day 3 of Social Media Week NYC brought some great thinkers and doers together to discuss the impact of social listening and intelligence, as well as the broader idea of quantifying and contextualizing the human emotion expressed in text, pictures and emoji (to name a few) on the internet.

Microsoft’s Emotion API

EmojiSentiment.com

Why It’s Hot

In our continuously evolving digital world, I think there are two assumptions we can make about the future: technology will continue to get better, and people will continue to use and shape it. As digital marketers, the more tools and approaches we have for understanding human behaviors in using this technology – and, more importantly, the emotions and motivations behind those behaviors – the better suited we will be to create experiences that add value to both people’s lives and our brands’ bottom lines. These resources (while still new and working out the kinks) are a great indication of what’s to come to help us do so.


Fake News Challenge: Using AI To Crush Fake News

The Fake News Challenge is a grassroots competition of over 100 volunteers and 71 teams from academia and industry to find solutions to the problem of fake news.

The competition is designed to foster the development of new tools that help human fact-checkers separate real news from fake news using machine learning, natural language processing, and AI.
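
The challenge’s first task frames this as stance detection: given a headline and an article body, decide whether the body agrees with, disagrees with, discusses, or is unrelated to the headline. A toy sketch of that setup – the single word-overlap feature is a deliberate oversimplification of real entries:

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy headline/body pairs with stance labels, following the challenge's
    # framing: agree / disagree / discuss / unrelated.
    pairs = [("Moon base confirmed", "Officials confirmed the base today.", "agree"),
             ("Moon base confirmed", "Experts say no such base exists.", "disagree"),
             ("Moon base confirmed", "A panel will review the claims.", "discuss"),
             ("Moon base confirmed", "Local team wins championship.", "unrelated")]

    vec = TfidfVectorizer().fit([h + " " + b for h, b, _ in pairs])

    def features(headline, body):
        h, b = vec.transform([headline]), vec.transform([body])
        overlap = (h.multiply(b)).sum()  # crude headline/body similarity
        return [overlap]

    X = np.array([features(h, b) for h, b, _ in pairs])
    y = [s for _, _, s in pairs]
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    print(clf.predict([features("Moon base confirmed",
                                "The base was confirmed by NASA.")]))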


http://www.fakenewschallenge.org/

Why it’s hot:

  • When everyone can create content anywhere, it’s important that truth be validated and misinformation be identified.
  • This is an immensely important and complex task executed as a global hackathon spread over six months. Big challenges can be approached in new ways.
  • This challenge will result in new tools that could make their way into our publishing platforms, our social networks, etc. – is this potentially good or bad for us?


Wearable Sensor Technology Helps Keep Students Safe on College Campuses

Whether you are on a college campus or on the streets of NYC, walking home alone after a night out means taking extra safety precautions to stay out of harm’s way. Sexual violence on campus has recently reached all-time highs, so in response to this epidemic a new app called MrGabriel has been developed, bringing safety to its users by pairing wearable sensor technology with machine learning and real-time data.

When a user is wearing or carrying an Apple Watch or iPhone, MrGabriel monitors for sudden movements or changes of pace that could be interpreted as signs of danger. If these seem irregular, the app sends the user a message asking if they are OK; if they don’t respond or dismiss the message, it triggers an alert in the form of an SMS to three friends or family members the user has chosen, called “angels.” The SMS (chosen because it works over a basic cell signal rather than an internet connection) relays that their friend needs help and provides their exact location, the time of the alert, and the user’s phone number to call immediately. The location is updated every 10 seconds, or every yard, until the user cancels the alert from the device. Friends and family, rather than 911, were chosen as the point of contact in case of errors or accidental dismissals of the confirmation screen.
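
That detect-confirm-escalate flow can be sketched as a small loop over motion readings. Every threshold, contact number and helper below is illustrative, not MrGabriel’s actual implementation:

    ANGELS = ["+15550001111", "+15550002222", "+15550003333"]  # chosen contacts
    SUDDEN_MOTION_G = 2.5   # illustrative threshold for a "sudden move"
    CONFIRM_WINDOW_S = 30   # illustrative time the user gets to respond

    def user_confirms_ok(timeout_s: int) -> bool:
        """Stand-in for the watch/phone prompt; True if the user says they're OK."""
        return False

    def send_sms(number: str, text: str) -> None:
        print(f"SMS to {number}: {text}")  # stand-in for a real SMS gateway

    def monitor(g_force_readings, location: str) -> None:
        for g_force in g_force_readings:
            if g_force >= SUDDEN_MOTION_G:            # possible sign of danger
                if not user_confirms_ok(CONFIRM_WINDOW_S):
                    for angel in ANGELS:              # no response: escalate
                        send_sms(angel, f"Your friend may need help at {location}")
                return

    monitor([1.0, 1.1, 3.2], "40.7580 N, 73.9855 W")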


Why It’s Hot

There have been other apps similar to this, like Guardian and Stiletto, but they all rely on manual activation, which isn’t as effective when someone is in real danger. MrGabriel relies on sensors and artificial intelligence to detect the changes in behavior that trigger an alert, making it much more practical and useful. With the technology we have, it is important that we do everything we can to keep ourselves, our family and our friends safe. I think this could help keep people safer and prevent more tragedies from occurring.

Read more about MrGabriel here.