Computer scientists at Stanford say they have “developed the first system for automatically synthesizing sounds to accompany physics-based computer animations” that “simulates sound from first physical principles” and, most impressively, unlike other AI, “no training data is required”.
Why it’s hot:
While most AI to date requires overt training in order to properly synthesize an output, this system requires none. It’s not the first AI to require no human assistance, but a future that might have seemed years off for AI is rapidly advancing. If AI can construct sound from visuals based on physical principles, you have to wonder how hard it might be to construct physical objects based on sound.
The next time you’re searching for a recipe on Allrecipes.com, you might see a cocktail pairing sponsored by Tito’s Handmade Vodka. The two brands have teamed up on a mixologist chatbot, Barkeep, to recommend drinks and walk people through preparation.
Barkeep is powered by natural language processing and a mixology database to suggest cocktails based on seasonality, popularity, and users’ preferences. The chatbot will be accessible by Facebook Messenger, as well as integrated within the Allrecipes search database.
Beyond recipes, the chatbot also features a catalog of on-demand alcohol delivery powered by Drizly.
The partnership is a natural fit — Allrecipes users are 20% more likely than the average U.S. adult to be frequent entertainers, and are more likely to have prepared a mixed drink in the past week. They are also 21% more likely than the general U.S. population to have consumed Tito’s Handmade Vodka in the last six months, according to comScore Fusion.
Why It’s Hot
39 million people use Allrecipes.com every month. This is a natural way to introduce cocktail pairings and alcohol delivery to a large, engaged audience.
Beijing welcomed its first unmanned smart bakery, a collaboration between Alibaba and domestic baker brand Wedomé. The bakery uses technologies including AI image recognition, mobile payment and QR code to enable unmanned services.
Why it’s hot: Mobile payment is so prominent in China that it may one day set the nation on the path to becoming a cashless economy.
New machine learning techniques are giving surveillance cameras the ability to capture suspicious behavior without the help of human supervision.
Japanese telecom company NTT East built AI Guardman, a new AI security camera, with startup Earth Eyes Corp. They combined open-source technology developed at Carnegie Mellon to scan video streams with their own algorithm that matches the data from these streams to ‘suspicious’ behavior. From early testing, NTT East claims AI Guardman reduced shoplifting in stores by roughly 40 percent.
But there are potential problems with this security camera. First, it sometimes misidentifies indecisive customers (who might pick up an item, put it back, and then pick it up again) and salesclerks who are restocking shelves as potential shoplifters. Second, it is possible that the data may be biased towards certain groups.
Why it’s Hot:
Currently, store owners may only learn about shoplifting when it comes to their attention, which could be several hours after the fact. Once this technology is made available, they can be alerted to suspicious behavior in real time.
The Maze, a new choose-your-own-adventure game for Amazon’s Alexa, lets you play the role of a Westworld host, taking you through up to 60 storylines with 400 unique game choices and up to two hours of narrative.
The game features two of the show’s main actors: Jeffrey Wright, playing Bernard, and Angela Sarafyan, reprising her role as Clementine. It follows a simple chatbot, an AR and VR project, and a life-size replica of the show’s main town, Sweetwater (complete with actors fully in character), built for South by Southwest. First conceived in March and completed in just three months, the Alexa project is timed to the highly anticipated finale of season two (this past Sunday).
Those who have an Alexa-enabled device and download The Maze skill will start their adventure by saying, “Alexa, open Westworld” before venturing into the show’s world as a Westworld host–as the humanoid AIs in the show are called. They’ll be tasked with answering questions about the show, and trying to advance through three levels of increasing difficulty.
“Among the storylines users will explore are an encounter with a posse of bandits riding through Sweetwater; a ravenous family of homesteaders in Python Pass; a devious barback at Las Mudas, a run-in with a Confederado in Pariah, and more. Keeping in mind tips like showing the blackjack dealer at the saloon some respect and not pestering a traveler outside The Ranch about her experience at La Cantina in Las Mudas will help users do better.”
Why it’s hot: This is an engaging and shareable game that is right in line with the Westworld cult fanbase. I would be interested to see completion rates, as well as the percentage of Alexa owners who use the device for games. Regardless, this is a great way to engage this fan base!
Uber has applied for a patent to use AI to determine a passenger’s “user state” before they’re picked up by their driver. While this may trigger alarm for those who rely on Uber to get them home safely after a night of drinking, it seems as though the company has the passenger’s safety top of mind.
If implemented, the technology would scan for patterns in behaviors like interaction speed, typing, device angle, and even walking speed to understand when a customer seems to be acting out of the ordinary. It would also measure how far from normal the behavior appears.
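Uber’s patent application doesn’t disclose the actual model, but one plausible reading of “measuring how far from normal the behavior appears” is a simple anomaly score: compare tonight’s behavioral signals against the user’s own historical baseline. Everything below (the signal names, units, and the flagging threshold) is an illustrative assumption, not Uber’s implementation.

```python
# Hypothetical sketch: flag a ride request whose behavioral signals
# deviate sharply from the user's historical baseline. Uber's patent
# does not disclose its model; this is one plausible approach.
from statistics import mean, stdev

def anomaly_score(history, current):
    """Average absolute z-score of current signals vs. the user's history."""
    scores = []
    for signal, value in current.items():
        past = history[signal]
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            continue
        scores.append(abs(value - mu) / sigma)
    return sum(scores) / len(scores) if scores else 0.0

# Signals named in the patent application: typing/interaction speed,
# device angle, walking speed (the units and values are made up).
history = {
    "typing_speed_cpm": [180, 175, 190, 185, 178],
    "walking_speed_mps": [1.4, 1.5, 1.35, 1.45, 1.4],
    "device_angle_deg": [30, 35, 28, 32, 31],
}
tonight = {"typing_speed_cpm": 95, "walking_speed_mps": 0.7, "device_angle_deg": 70}

score = anomaly_score(history, tonight)
unusual = score > 3.0  # hypothetical flagging threshold
```

Here the slow typing, slow walking, and tilted device all sit far outside this user’s baseline, so the request would be flagged as an “unusual state.”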
The company hasn’t clarified exactly what this will mean for users, but the patent application mentions that passengers may be paired with drivers “with experience or training with users having an unusual state.” It may also encourage drivers to use pickup and drop off locations that are well-lit and easy to find.
Why It’s Hot:
This unique application of AI can potentially make for a smoother ride for both Uber drivers and passengers. It may also inspire other apps to push the boundaries of how to improve customer experience based on user behavior data.
You’re a chess enthusiast, but let’s face it: your chess board is probably collecting dust in your closet. Since no one in your household wants to play, you’re forced to play a game online or, even worse, not at all. Don’t worry—InfiVention Technologies will solve your issue with artificial intelligence.
InfiVention Technologies is redefining board games with the help of AI. Its product Square Off lets you play a game of chess on a real board with real chess pieces against opponents online or against the board’s own artificial intelligence. You will see your opponent’s every move in real time, right in front of your eyes. The board uses magnets to move the pieces, taking care not to dislodge adjacent pieces from their positions.
Why it’s Hot:
No one expected AI to take over board games; it’s usually associated with computers. Since board games are rarely single-player, many have moved online so you can play at your own convenience. Square Off brings back the charm of playing chess on a physical board.
Zillow plans to build AI into its search engine with the goal of transforming the site from a real estate search engine into an assistant that understands what people are looking for. The idea is to learn the types of criteria people care about and recommend homes based on them.
For example, the AI will be able to understand your taste in decor. It’ll be able to take into account the interior photos of homes people are looking at, understand what they might like and make recommendations based on that.
Why it’s hot (or not): There’s a chance a home buyer might miss a house with a lot of potential simply because it doesn’t meet the AI’s learned criteria.
Duplex, Google’s robot assistant, now makes eerily lifelike phone calls for you.
The unsettling feature, which will be available to the public later this year, is enabled by a technology called Google Duplex, which can carry out “real world” tasks on the phone, without the other person realising they are talking to a machine. The assistant refers to the person’s calendar to find a suitable time slot and then notifies the user when an appointment is scheduled.
During demonstrations, the virtual assistant did not identify itself and instead appeared to deceive the human at the end of the line. However, in the blogpost, the company indicated that might change.
“It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.”
Why It’s, Ummmm, Hot
Another entry in our ‘is it good, is it bad’ AI collection. Helpful if used ethically? Maybe. Scary if abused? Absolutely.
If you’re experiencing a panic attack in the middle of the day or want to vent or need to talk things out before going to sleep, you can connect with Tess the mental health chatbot through an instant-messaging app such as Facebook Messenger (or, if you don’t have an internet connection, just text a phone number).
Tess is the brainchild of Michiel Rauws, the founder of X2 AI, an artificial-intelligence startup in Silicon Valley. The company’s mission is to use AI to provide affordable and on-demand mental health support.
A Canadian non-profit that primarily delivers health care to people in their own homes, Saint Elizabeth recently approved Tess as a part of its caregiver in the workplace program and will be offering the chatbot as a free service for staffers.
To provide caregivers with appropriate coping mechanisms, Tess first needed to learn about their emotional needs. In her month-long pilot with the facility, she exchanged over 12,000 text messages with 34 Saint Elizabeth employees. The personal support workers, nurses and therapists that helped train Tess would talk to her about what their week was like, if they lost a patient, what kind of things were troubling them at home – things you might tell your therapist. If Tess gave them a response that wasn’t helpful, they would tell her, and she would remember her mistake. Then her algorithm would correct itself to provide a better reply for next time.
On a fateful day in November of 1963, JFK never got to make his “Trade Mart” speech in Dallas. But thanks to the UK’s The Times and friends, we now have a glimpse at what that speech would’ve sounded like that day. Using the speech’s text and AI, The Times:
“Collected 831 analog recordings of the president’s previous speeches and interviews, removing noise and crosstalk through audio processing as well as using spectrum analysis tools to enhance the acoustic environment. Each audio file was then transferred into the AI system, where they used methods such as deep learning to understand the president’s unique tone and quirks expressed in his speech. In the end, the sound engineers took 116,777 sound units from the 831 clips to create the final audio.”
Why It’s Hot:
It seems we’re creating a world where anyone could be imitated scientifically. In an instance like this, it’s great: hearing JFK’s words spoken, especially the sentiment in the clip above, was a joy for someone who cares about history and this country, especially given its current climate. But what if the likeness weren’t recreated to deliver a speech he wrote in his own time, but rather something he never actually said or intended to say? It brings a whole new meaning to “fake news”.
Norman isn’t your typical AI that’s here for you to ask random questions when you’re bored. Oh no, Norman was created by researchers at MIT as an April Fools’ prank. At the beginning of its creation, it was exposed to “the darkest corners of Reddit”, which resulted in the development of its psychopathic data-processing tendencies. The MIT researchers define Norman as:
“A psychotic AI suffering from chronic hallucinatory disorder; donated to science by the MIT Media Laboratory for the study of the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms”
Because of the neural network’s dark tendencies, the project’s website states that Norman is being “kept in an isolated server room, on a computer that has no access to the internet or communication channels to other devices.” As an additional security measure, the room also has weapons such as hammers, saws, and blow-torches in case there happens to be any kind of emergency or malfunction of the AI that would require it to be destroyed immediately.
Norman’s neural network is so far gone that researchers believe “some of the encodings of the hallucinatory disorders reside in its hardware and there’s something fundamentally evil in Norman’s architecture that makes his re-training impossible.” Even after being exposed to neutral holograms of cute kittens and other fun and magical stuff, Norman is, essentially, just evil. When presented with Rorschach inkblot images, Norman just went … well, let’s say in the comic universe, it’d be the ideal villain.
Why it’s hot:
We all know that AI is going to take over the world, and that technology seems to be controlling us more than we’re controlling it, but this almost perfectly depicts the dangers that could result from AI being developed on violence-fueled datasets.
The Brazilian edition of business magazine Forbes has created a provocative strategy to spotlight the issue of corruption, which is flourishing while the nation continues to struggle economically.
Working with Ogilvy Brazil, Forbes has personified the issue by creating a fictional character to represent the estimated $61bn that corruption costs the nation annually. The result is Ric Brasil, an AI-generated avatar whose aggregated ‘earnings’ from white collar crime would place him at number 8 in the upcoming Forbes 2018 billionaire list.
The features and persona of Ric Brasil have been developed by technology companies Nexo and Notan drawing on existing data and images held on convicted corporate criminals. Over the last eight months this material has been analysed along with information sourced from media reports, witness statements, interviews and books covering two of Brazil’s most infamous corruption cases.
According to the magazine’s CEO, Antonio Camarotti, ‘Forbes wants to take a stand against corruption. We thought of this campaign as a way not only to raise public awareness to the extent of the issue, but also to value honest business people—those who comply with their duties, pay taxes, and shun taxpayer’s money as a way to make a fortune. Someone who won’t let himself be lured into corruption practices.’
Members of the press will be able to interview Ric Brasil in the run up to the launch of the billionaires list on April 16.
Part of the problem with corporate crime is that while it has a cost, it’s often hard to find a way to channel public anger against what can feel like a victimless crime. By literally putting a face on an intangible, distributed crime – vividly ‘bringing the problem to life’ – Forbes has a better chance of getting people to connect with the issue.
In the latest episode of life imitating art is a Y Combinator startup whose proposition is essentially uploading your brain to the cloud. Per the source: “Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.”
Why It’s Hot:
What’s not hot is that you have to die in order to do it. What’s interesting is the idea of treating our consciousness almost like iPhone storage, and the possibility that reincarnation by technology could be real.
Now, to those who believe in prophecies, this may seem like the end of the world. To be frank, a lot of people think this may be a step too far … but it’s for science! Apparently someone at a Swedish funeral agency thought it would be brilliant to create an AI “replica” of deceased loved ones so that families can have them back in their lives. They’re asking for donations (yes, they’re asking for corpses) so that they can try to create a synthetic replica of the deceased’s voice.
Why it’s not hot:
Basically, the world is going to end and we’re just going to be replaced by AI replicas of the dead. Fun.
This week, the geniuses at Google and its “health-tech subsidiary” Verily announced AI that can predict your risk of a major cardiac event with roughly the same accuracy as the currently-accepted method using just a scan of your eye.
They have created an algorithm that analyzes the back of your eye for important predictors of cardiovascular health “including age, blood pressure, and whether or not [you] smoke” to assess your risk.
As explained via The Verge:
“To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).
When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.”
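The evaluation The Verge describes, telling which of two patients had the event given one of each, is the standard pairwise-discrimination measure (equivalent to the ROC AUC). A minimal sketch of that metric, with made-up risk scores rather than Google’s data:

```python
from itertools import product

def pairwise_discrimination(scores_event, scores_no_event):
    """Fraction of (event, no-event) patient pairs where the model ranks
    the event patient as higher risk; ties count as half. This is the
    70%-vs-72% number quoted above (equivalent to the ROC AUC)."""
    wins = 0.0
    for e, n in product(scores_event, scores_no_event):
        if e > n:
            wins += 1
        elif e == n:
            wins += 0.5
    return wins / (len(scores_event) * len(scores_no_event))

# Illustrative model risk scores, not Google's results.
event = [0.9, 0.7, 0.6]      # patients who had a cardiovascular event
no_event = [0.8, 0.4, 0.3, 0.2]  # patients who did not
acc = pairwise_discrimination(event, no_event)  # 10 of 12 pairs ranked correctly
```

A value of 0.5 would be coin-flipping; Google’s 0.70 means the eye scan alone carries most of the signal the blood-test-based SCORE method (0.72) provides.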
Why It’s Hot:
This type of application of AI can help doctors quickly know what to look into, and shows how AI could help them spend less time diagnosing, and more time treating. It’s a long way from being completely flawless right now, but in the future, we might see an AI-powered robot instead of a nurse before we see the doctor.
Google is rolling out a few new features to its Google Flights search engine to help travelers tackle some of the more frustrating aspects of air travel – delays and the complexities of the cheaper, Basic Economy fares. Google Flights will take advantage of its understanding of historical data and its machine learning algorithms to predict delays that haven’t yet been flagged by airlines themselves.
As Google explains, the combination of data and A.I. technologies means it can predict some delays in advance of any sort of official confirmation. Google says that it won’t actually flag these in the app until it’s at least 80 percent confident in the prediction, though.
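That 80-percent rule is a simple but important design choice: the model may suspect many delays, but only the confident ones are surfaced to travelers. A minimal sketch of the flagging policy (flight numbers, probabilities, and the output format are all invented for illustration):

```python
# Sketch of the flagging policy Google describes: predictions are only
# shown once model confidence reaches 80%. All data here is made up.
CONFIDENCE_THRESHOLD = 0.80

def flag_delays(predictions):
    """Keep only delay predictions confident enough to show travelers."""
    return [
        f"{flight}: likely delayed ({reason})"
        for flight, prob, reason in predictions
        if prob >= CONFIDENCE_THRESHOLD
    ]

predictions = [
    ("UA 123", 0.91, "aircraft arriving late"),  # confident: shown
    ("DL 456", 0.62, "weather"),                 # too uncertain: suppressed
    ("AA 789", 0.80, "weather"),                 # exactly at threshold: shown
]
alerts = flag_delays(predictions)
```

Suppressing the 62%-confidence prediction trades recall for trust: a wrong delay alert is arguably worse for travelers than no alert at all.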
It will also provide reasons for the delays, like weather or an aircraft arriving late.
You can track the status of your flight by searching for your flight number or the airline and flight route, notes Google. The delay information will then appear in the search results.
The other new feature aims to help travelers make sense of what Basic Economy fares include and exclude with their ticket price. Google Flights will now display the restrictions associated with these fares – like restrictions on using overhead space or the ability to select a seat, as well as the fare’s additional baggage fees. It’s initially doing so for American, Delta and United flights worldwide.
Why it’s hot: This is a great example of using AI and predictive methods to drive a better customer experience, and to combat an industry that is usually less than transparent. It makes Google’s search solutions more desirable and solidifies Google as THE place to search for everything. I would like to see the alerts become actionable, though; right now they are more anxiety-creators.
The next generation of powerful telescopes will scan millions of stars and generate massive amounts of data that astronomers will be tasked with analyzing. That’s way too much data for people to sift through and model themselves — so astronomers are turning to AI to help them do it.
How they’re using it:
1) Coordinate telescopes. The large telescopes that will survey the sky will be looking for transient events — new signals or sources that “go bump in the night,” says Los Alamos National Laboratory’s Tom Vestrand.
2) Analyze data. Every 30 minutes for two years, NASA’s new Transiting Exoplanet Survey Satellite will send back full-frame photos of almost half the sky, giving astronomers some 20 million stars to analyze. Over 10 years, there will be 50 million gigabytes of raw data collected.
3) Mine data. “Most astronomy data is thrown away but some can hold deep physical information that we don’t know how to extract,” says Joshua Peek from the Space Telescope Science Institute.
Why it’s hot:
Algorithms have helped astronomers for a while, but recent advances in AI — especially image recognition and faster, more inexpensive computing power — mean the techniques can be used by more researchers. The new AI will automate the process and be able to identify things that humans may not even know exist or begin to understand.
“How do you write software to discover things that you don’t know how to describe? There are normal unusual events, but what about the ones we don’t even know about? How do you handle those? That will be where real discoveries happen, because by definition you don’t know what they are.” – Tom Vestrand, Los Alamos National Laboratory
The wave of magical CES 2018 innovations has begun to roll in, and among those already announced is Nuance Communications’ “Dragon Drive” – an (extremely) artificially intelligent assistant for your car.
“By combining conversational artificial intelligence with a number of nonverbal cues, Dragon Drive helps you talk to your car as though you were talking to a person. For example, the AI platform now boasts gaze detection, which allows drivers to get information about and interact with objects and places outside of the car simply by looking at them and asking Dragon Drive for details. If you drive past a restaurant, you can simply focus your gaze at said establishment and say, “Call that restaurant,” or “How is that restaurant rated?” Dragon Drive provides a “meaningful, human-like response.”
Moreover, the platform enables better communication with a whole host of virtual assistants, including smart home devices and other popular AI platforms. In this way, Dragon Drive claims, drivers will be able to manage a host of tasks all from their cars, whether it’s setting their home heating system or transferring money between bank accounts.
Dragon Drive’s AI integration does not only apply to external factors, but to components within the car as well. For instance, if you ask the AI platform to find parking, Dragon Drive will take into consideration whether or not your windshield wipers are on to determine whether it ought to direct you to a covered parking area to avoid the rain. And if you tell Dragon Drive you’re cold, the system will automatically adjust the car’s climate (but only in your area, keeping other passengers comfortable).
Why It’s Hot:
Putting aside the question of how many AI assistants we might have in our connected future, what was really interesting to see was the integration of voice and eye-tracking biometrics. Between using your voice as your key (and to personalize settings for you and your passengers), the car reminding you of memories that happened at locations you’re passing, and identifying stores, buildings, restaurants, and other landmarks along your route with just a gaze, it’s amazing to think what the future holds when all the technologies we’ve only just seen emerging in recent years converge.
New York startup Finery has created an AI-powered operating system that will organize your wardrobe.
It provides an automated system that reminds women what options they have, as well as creating outfits for them – saving users a lot of time and money (as they won’t mistakenly buy another grey cashmere jumper if they know they already have three at home).
Users link The Wardrobe Operating System to their email address, so the platform can browse through their mailbox to find their shopping history. All the items they’ve purchased online are then transferred to their digital wardrobe (with 93% accuracy).
Any clothing bought from a bricks-and-mortar shop can be added as well, but that’s done manually by either searching the Finery database for the item or uploading an image (either one you’ve taken or one from the internet). Finery uses Cloud Vision to identify what the object is (skirt, dress, trousers, etc.), the color and the material – then the brand and size can be added manually.
Once your clothing is all uploaded, the platform uses algorithms to recommend outfits based on the pieces you own as well as recommending future purchases that would match with your current items.
Users can also create and save outfits within the platform. And, if they give Finery access to their shopping accounts, the startup will aggregate all their unpurchased shopping cart items into a single Wishlist and alert them when said items go on sale.
Finery will alert its users when the return window for an item they’ve purchased is closing. And it will also let them know if they already own an item that looks similar to one they are planning on buying.
Finery has currently partnered with over 500 stores, equivalent to more than 10,000 brands, to create its online catalog – covering about ninety percent of the retail market.
Next, the company will be expanding into children’s clothing, and then men’s fashion. And it’s working on developing algorithms to suggest outfit combinations based on weather, location and personal preference, as well as a personalized recommendations tool for items not yet in users’ closets.
Why It’s Hot:
This personal “stylist” gives fashion-handicapped shoppers (like myself) the courage to shop online with confidence.
It helps avoid unnecessary fashion splurges – a BFD considering the average woman spends $250–$350K on clothes over her lifetime.
It acts as a fashion dream-catcher that helps grant your wish list by making purchases easy.
One of the major milestones in the relatively short history of AI is when Google’s AlphaGo beat the best human Go player in the world in three straight games early last year. In order to prepare AlphaGo for its match, Google trained it using games played by other Go players, so it could observe and learn which moves win and which don’t. It learned from essentially watching others.
This week, Google announced AlphaGo Zero, AI that completely taught itself to win at Go. All Google gave it was the rules, and by experimenting with moves on its own, it learned how to play, and beat its predecessor AlphaGo 100 games to zero after just over a month of training.
Why It’s Hot:
AI is becoming truly generative with what DeepMind calls “tabula rasa learning”. While a lot of AI we still see on a daily basis is extremely primitive in comparison, the future of AI is a machine’s ability to create things with basic information and a question. And ultimately, learning on its own can lead to better results. As researchers put it, “Even when reliable data sets are available, they may impose a ceiling on the performance of systems trained in this manner…By contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.”
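AlphaGo Zero itself pairs deep neural networks with Monte Carlo tree search, which is far beyond a blog snippet, but the tabula-rasa idea (improving from nothing but the rules and your own games) can be sketched with tabular learning on a toy game. In this hedged illustration the learner is given only the rules of Nim (21 stones, take 1 to 3, taking the last stone wins) and learns entirely from experience against a random opponent; the game choice, opponent, and update rule are all my illustrative assumptions, not DeepMind’s method.

```python
# Tabula-rasa sketch: the agent knows only the rules of Nim and
# improves purely from the outcomes of its own games.
import random

random.seed(0)
Q = {}  # (stones_left, take) -> estimated value of that move
N = {}  # visit counts, for incremental averaging

def best_move(stones):
    """Greedy move: the take (1-3) with the highest learned value."""
    return max(range(1, min(3, stones) + 1),
               key=lambda take: Q.get((stones, take), 0.0))

def train(episodes=5000, epsilon=0.2):
    """Play many games against a random opponent, then credit each of
    the learner's moves with the final result (+1 win / -1 loss)."""
    for _ in range(episodes):
        stones, learner_turn = 21, True
        trajectory = []  # the learner's (state, action) pairs this game
        learner_won = False
        while stones > 0:
            if learner_turn:
                if random.random() < epsilon:
                    take = random.randint(1, min(3, stones))  # explore
                else:
                    take = best_move(stones)                  # exploit
                trajectory.append((stones, take))
            else:
                take = random.randint(1, min(3, stones))      # random opponent
            stones -= take
            if stones == 0:
                learner_won = learner_turn  # last stone taken wins the game
            learner_turn = not learner_turn
        reward = 1.0 if learner_won else -1.0
        for state, action in trajectory:
            n = N[(state, action)] = N.get((state, action), 0) + 1
            old = Q.get((state, action), 0.0)
            Q[(state, action)] = old + (reward - old) / n  # running mean

train()
```

After training, the agent discovers on its own that with 3 stones left it should take all 3 and win immediately, a fact no one told it. The same principle, scaled up enormously, is what lets AlphaGo Zero exceed the play of the humans whose games trained its predecessor.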
Robots are making their way into schools and education to help children lower their stress and boost their creativity. Among those who have diseases such as diabetes and autism, robots can even help restore their self-confidence.
One study shows that children with autism engage better with robots than with humans, because robots are simple and predictable.
Another research team working with children with diabetes makes its robots “imperfect” and has them make mistakes so they don’t intimidate the children. The children learn that they don’t have to be perfect all the time.
Why it’s hot (or not): are robots the right companions for children? What impact would it have on human interactions if children are exposed to AI at such a young age?
Slack CEO Stewart Butterfield recently spoke to MIT Technology Review about the ways the company plans to use AI to keep people from feeling overwhelmed with data. Some interesting tidbits from the interview…
When asked about goals for Slack’s AI research group, Butterfield pointed to search. “You could imagine an always-on virtual chief of staff who reads every single message in Slack and then synthesizes all that information based on your preferences, which it has learned about over time. And with implicit and explicit feedback from you, it would recommend a small number of things that seem most important at the time.”
When asked what else the AI group was researching, Butterfield answered Organizational Insights. “I would—and I think everyone would—like to have a private version of a report that looks at things like: Do you talk to men differently than you talk to women? Do you talk to superiors differently than you talk to subordinates? Do you use different types of language in public vs. private? In what conversations are you more aggressive, and in what conversations are you more kind? If it turns out you tend to be accommodating, kind, and energetic in the mornings, and short-tempered and impatient in the afternoon, then maybe you need to have a midafternoon snack.”
Why It’s Hot
The idea of analyzing organizational conversation to learn about and solve collaboration and productivity issues is incredibly intriguing – and as always with these things, something to keep an eye on to ensure the power is used for good.
Deere & Company has signed an agreement to acquire Blue River Technology, a leader in applying machine learning to agriculture.
Blue River has designed and integrated computer vision and machine learning technology that will enable growers to reduce the use of herbicides by spraying only where weeds are present, optimizing the use of inputs in farming – a key objective of precision agriculture.
“Blue River is advancing precision agriculture by moving farm management decisions from the field level to the plant level,” said Jorge Heraud, co-founder and CEO of Blue River Technology. “We are using computer vision, robotics, and machine learning to help smart machines detect, identify, and make management decisions about every single plant in the field.”
AI is continuing to rule the press headlines across all industries. No matter who you are or what you do, your life will somehow be affected by artificial intelligence. Below are just a few charts recently published by the Electronic Frontier Foundation on how quickly AI is catching up with humans.
Why It’s Hot:
Artificial intelligence will continue to get better over time. So much so that Researchers at Oxford and Yale predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years.
At the same time, Google announced it is launching an auto-reply system that scans emails and generates possible responses to choose from.
The new functionality, added to the app store versions of Gmail, works by analyzing a large, anonymized body of email to generate possible responses. Machine-learning systems then rank these to pick the “best responses to the email at hand”. Google is keen to emphasise that its system knows its limits. Not everything merits an automated response – only about one-third of emails are covered.
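The generate-then-rank flow described above can be illustrated with a toy example. Google's real system uses learned models trained on a huge anonymized corpus; here a simple word-overlap score stands in for the ranker, and the candidate replies are invented for illustration.

```python
"""Toy sketch of ranking canned replies against an incoming email,
mimicking a generate-then-rank auto-reply pipeline."""

CANDIDATES = [
    "Sounds good, thanks!",
    "I'll take a look.",
    "Can we meet tomorrow?",
]

def score(email_text, reply):
    """Crude relevance score: count of shared lowercase words."""
    return len(set(email_text.lower().split()) & set(reply.lower().split()))

def top_replies(email_text, k=3):
    """Rank the canned candidates against the email; best first."""
    ranked = sorted(CANDIDATES, key=lambda r: score(email_text, r),
                    reverse=True)
    return ranked[:k]

print(top_replies("can we meet tomorrow"))
```

A production system would also gate on whether the email merits any automated reply at all, which is the "only about one-third of emails are covered" check the article mentions.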
Most email is unnecessary and most email responses are perfunctory acknowledgements – verbal read-receipts. In the war for control of your inbox, Gmail may have given us an important missile defence shield. Nice! Thanks! Love it!
It’s no surprise that climate change is having detrimental effects on our planet, but one of the most troubling is its effect on agriculture. The MIT Media Lab is hoping to remedy this by using special “food computers” to create the perfect climates for growing food, no matter the location or time of year. That means that not only could countries farm their local crops all year round, but they could also grow crops that are not native to their region of the world, giving them fresh produce on demand. Say goodbye to having to wait for shipments!
The Open Agriculture Initiative Personal Food Computer was first created in 2015, and can study and replicate the best growing conditions for specific plants with the use of sensors, actuators and machine vision. The Personal Food Computer can alter the light, nutrients and salinity of water. As the computer watches a plant, like basil, grow, it picks up data that can be used on the next set of crops. The research team is also trying to make the food itself tastier by maximizing the number of volatile molecules inside the crop, which is made possible by leaving the computer on constantly.
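The sensor-and-actuator loop described above amounts to a climate controller tracking a "recipe" of setpoints. Below is a minimal sketch in that spirit; the recipe values, reading names, and tolerance are invented for illustration and are not OpenAg's actual parameters.

```python
"""Minimal closed-loop sketch of a climate-recipe controller: compare
sensor readings to per-crop setpoints and suggest actuator adjustments."""

# A hypothetical climate recipe: target setpoints for one crop.
RECIPE = {"light_hours": 18, "water_salinity_ppm": 800, "nutrient_ec": 1.6}

def control_step(readings, recipe, tolerance=0.05):
    """Return an adjustment per reading that drifts past the tolerance."""
    actions = {}
    for key, target in recipe.items():
        value = readings.get(key)
        if value is None:
            continue  # no sensor data for this variable this cycle
        error = (value - target) / target
        if abs(error) > tolerance:
            actions[key] = "decrease" if error > 0 else "increase"
    return actions

readings = {"light_hours": 18, "water_salinity_ppm": 950, "nutrient_ec": 1.4}
print(control_step(readings, RECIPE))
# salinity is ~19% high -> decrease; EC is ~12% low -> increase
```

The data-gathering side the article mentions would sit on top of a loop like this: logging each cycle's readings and outcomes so the next crop's recipe can be tuned.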
Babak Hodjat, CEO of Sentient, says it’s all about engineering food in a totally different way: “Ultimately, this is non-GMO GMO. You’re not messing with the plant’s DNA. You’re just allowing it to exhibit the behavior it would in nature should that kind of environment exist.”
Why It’s Hot
Rolling with the punches, so to speak. In the face of environmental change, we can adapt. At scale, something like this could shift how we approach agriculture, and could also inspire further environmental innovation.
352 AI experts forecast a 50% chance that AI will outperform humans in all tasks within 45 years, and automate all human jobs within 120 years. Many of the world’s leading machine learning experts were among those surveyed, including Yann LeCun, director of AI research at Facebook, Mustafa Suleyman from Google’s DeepMind, and Zoubin Ghahramani, director of Uber’s AI labs.
Get the full research document HERE; see page 14 for details on the predictions.
Two days ago, Publicis Groupe CEO and chairman Arthur Sadoun announced that his network would be forgoing all awards, trade shows, and other paid promotional efforts for more than a year while developing Marcel, an AI-powered “professional assistant” that it plans to launch next June at the 2018 VivaTech conference in Paris.
This well-timed announcement, right in the middle of Cannes, has generated a huge flurry of press, with speculation about the real driver ranging from “It’s just a publicity stunt” to “Make no mistake, this is purely about saving money in 2018 as growth has slowed to a crawl” to “It’s smart. Award shows are a misguided way to stroke a few people’s egos. On top of that there’s a ton of work being done for the sole purpose of winning awards. And the number of shows is ridiculous too.”
Regardless of intent, sticking to the plan will be difficult. Creatives across Publicis are reportedly up in arms. Surely, the lack of opportunity to stock a trophy case will make it more difficult for Publicis to attract some top creative talent. And, of course, clients like awards too. Poaching is a real concern.
Having gotten sucked into the drama, we’ve read it all, and this response, from R/GA, is a favorite. Quick wit has earned R/GA a share of Publicis’ spotlight!
Why it’s Hot: Publicity stunts. All the cool agencies are doing it.