Zillow plans to build AI into its search engine, with the goal of transforming the site from a real estate search engine into an assistant that understands what people want. The idea is to learn the criteria each shopper cares about and recommend homes accordingly.
For example, the AI will be able to understand your taste in decor: it can take into account the interior photos of the homes people look at, infer what they might like, and make recommendations based on that.
Why it’s hot (or not): There’s a chance a home buyer might miss a house with a lot of potential simply because it doesn’t match the criteria the AI has inferred.
Duplex, Google’s robot assistant, now makes eerily lifelike phone calls for you.
The unsettling feature, which will be available to the public later this year, is enabled by a technology called Google Duplex, which can carry out “real world” tasks on the phone, without the other person realising they are talking to a machine. The assistant refers to the person’s calendar to find a suitable time slot and then notifies the user when an appointment is scheduled.
During demonstrations, the virtual assistant did not identify itself and instead appeared to deceive the human on the other end of the line. However, in the blogpost, the company indicated that might change.
“It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.”
Why It’s, Ummmm, Hot
Another entry in our ‘is it good, is it bad’ AI collection. Helpful if used ethically? Maybe. Scary if abused? Absolutely.
If you’re experiencing a panic attack in the middle of the day or want to vent or need to talk things out before going to sleep, you can connect with Tess the mental health chatbot through an instant-messaging app such as Facebook Messenger (or, if you don’t have an internet connection, just text a phone number).
Tess is the brainchild of Michiel Rauws, the founder of X2 AI, an artificial-intelligence startup in Silicon Valley. The company’s mission is to use AI to provide affordable and on-demand mental health support.
A Canadian non-profit that primarily delivers health care to people in their own homes, Saint Elizabeth recently approved Tess as a part of its caregiver in the workplace program and will be offering the chatbot as a free service for staffers.
To provide caregivers with appropriate coping mechanisms, Tess first needed to learn about their emotional needs. In her month-long pilot with the organization, she exchanged over 12,000 text messages with 34 Saint Elizabeth employees. The personal support workers, nurses and therapists who helped train Tess would talk to her about what their week was like, if they lost a patient, what kinds of things were troubling them at home – things you might tell your therapist. If Tess gave them a response that wasn’t helpful, they would tell her, and she would remember her mistake. Her algorithm would then correct itself to provide a better reply next time.
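The correct-from-feedback loop described above can be sketched in miniature. Everything here is hypothetical – the topics, canned replies, and scoring are invented for illustration, not X2 AI’s actual system:

```python
from collections import defaultdict

class FeedbackBot:
    """Toy chatbot that downranks replies users flag as unhelpful."""

    def __init__(self):
        self.scores = defaultdict(float)  # (topic, reply) -> learned score
        self.replies = {
            "grief": ["That sounds incredibly hard.", "Have you tried journaling?"],
            "stress": ["Let's try a breathing exercise.", "Stress is very common."],
        }

    def respond(self, topic):
        # pick the highest-scoring reply for this topic
        return max(self.replies[topic], key=lambda r: self.scores[(topic, r)])

    def feedback(self, topic, reply, helpful):
        # reinforce helpful replies, penalize unhelpful ones
        self.scores[(topic, reply)] += 1.0 if helpful else -1.0

bot = FeedbackBot()
first = bot.respond("grief")
bot.feedback("grief", first, helpful=False)  # user says it didn't help
second = bot.respond("grief")                # a different reply wins next time
```

Real systems learn over far richer signals, but the principle is the same: replies that get flagged lose out to better-scoring ones in the next conversation.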
On a fateful day in November 1963, JFK never got to deliver his “Trade Mart” speech in Dallas. But thanks to the UK’s The Times and friends, we now have a glimpse of what that speech would’ve sounded like. Using the speech’s text and AI, The Times:
“Collected 831 analog recordings of the president’s previous speeches and interviews, removing noise and crosstalk through audio processing as well as using spectrum analysis tools to enhance the acoustic environment. Each audio file was then transferred into the AI system, where they used methods such as deep learning to understand the president’s unique tone and quirks expressed in his speech. In the end, the sound engineers took 116,777 sound units from the 831 clips to create the final audio.”
Why It’s Hot:
It seems we’re creating a world where anyone could be convincingly imitated. In an instance like this, it’s great: hearing JFK’s words spoken, especially the sentiment in the clip above, was a joy for someone who cares about history and this country, especially given its current climate. But what if the likeness weren’t recreated to deliver a speech written by him during his time, but rather something he never actually said or intended to say? It brings a whole new meaning to “fake news”.
Norman isn’t your typical AI that’s here for you to ask random questions when you’re bored. Oh no, Norman was created by researchers at MIT as an April Fools’ prank. At the beginning of its creation, it was exposed to “the darkest corners of Reddit,” which resulted in the development of its psychopathic data-processing tendencies. The MIT researchers define Norman as:
“A psychotic AI suffering from chronic hallucinatory disorder; donated to science by the MIT Media Laboratory for the study of the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms”
Because of the neural network’s dark tendencies, the project’s website states that Norman is being “kept in an isolated server room, on a computer that has no access to the internet or communication channels to other devices.” As an additional security measure, the room also has weapons such as hammers, saws, and blow-torches in case there happens to be any kind of emergency or malfunction of the AI that would require it to be destroyed immediately.
Norman’s neural network is so far gone that researchers believe “some of the encodings of the hallucinatory disorders reside in its hardware and there’s something fundamentally evil in Norman’s architecture that makes his re-training impossible.” Even after being exposed to neutral holograms of cute kittens and other fun and magical stuff, Norman remained, essentially, just evil. When presented with Rorschach inkblot images, Norman just went… well, let’s say that in the comic universe, he’d be the ideal villain.
Why it’s hot:
We all know the jokes about AI taking over the world, and technology already seems to control us more than we control it. This project almost perfectly depicts the dangers of developing AI on violence-fueled datasets.
The Brazilian edition of business magazine Forbes has created a provocative strategy to spotlight the issue of corruption, which is flourishing while the nation continues to struggle economically.
Working with Ogilvy Brazil, Forbes has personified the issue by creating a fictional character to represent the estimated $61bn that corruption costs the nation annually. The result is Ric Brasil, an AI-generated avatar whose aggregated ‘earnings’ from white collar crime would place him at number 8 in the upcoming Forbes 2018 billionaire list.
The features and persona of Ric Brasil have been developed by technology companies Nexo and Notan, drawing on existing data and images of convicted corporate criminals. Over the last eight months this material has been analysed alongside information sourced from media reports, witness statements, interviews and books covering two of Brazil’s most infamous corruption cases.
According to the magazine’s CEO, Antonio Camarotti, ‘Forbes wants to take a stand against corruption. We thought of this campaign as a way not only to raise public awareness to the extent of the issue, but also to value honest business people—those who comply with their duties, pay taxes, and shun taxpayer’s money as a way to make a fortune. Someone who won’t let himself be lured into corruption practices.’
Members of the press will be able to interview Ric Brasil in the run up to the launch of the billionaires list on April 16.
Part of the problem with corporate crime is that while it has a cost, it’s often hard to find a way to channel public anger against what can feel like a victimless crime. By literally putting a face on an intangible, distributed crime – vividly ‘bringing the problem to life’ – Forbes has a better chance of getting people to connect with the issue.
In the latest episode of life imitating art is a Y Combinator startup whose proposition is essentially uploading your brain to the cloud. Per the source: “Nectome is a preserve-your-brain-and-upload-it company. Its chemical solution can keep a body intact for hundreds of years, maybe thousands, as a statue of frozen glass. The idea is that someday in the future scientists will scan your bricked brain and turn it into a computer simulation. That way, someone a lot like you, though not exactly you, will smell the flowers again in a data server somewhere.”
Why It’s Hot:
What’s not hot is that you have to die in order to do it. What’s interesting is the idea of treating our consciousness almost like iPhone storage: that reincarnation by technology could be possible.
Now, to those who believe in prophecies, this may seem like the end of the world. To be frank, a lot of people think this may be a step too far … but it’s for science! Apparently someone at a Swedish funeral agency thought it would be brilliant to create an AI “replica” of deceased loved ones so that families can have them back in their lives. They’re asking for donations (yes, they’re asking for the dearly departed) so that they can try to create a synthetic replica of the deceased’s voice.
Why it’s not hot:
Basically, the world is going to end and we’re all going to be replaced by AI replicas of the dead. Fun.
This week, the geniuses at Google and its “health-tech subsidiary” Verily announced AI that can predict your risk of a major cardiac event, with roughly the same accuracy as the currently accepted method, using just a scan of your eye.
They have created an algorithm that analyzes the back of your eye for important predictors of cardiovascular health “including age, blood pressure, and whether or not [you] smoke” to assess your risk.
As explained via The Verge:
“To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).
When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.”
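The 70 percent figure describes a pairwise test: shown two patients, did the model assign the higher risk score to the one who actually had an event? A toy version of that evaluation (with made-up risk scores, nothing from Google’s model):

```python
def pairwise_accuracy(pairs):
    """pairs: (risk score of the event patient, risk score of the non-event patient)."""
    correct = sum(1 for r_event, r_none in pairs if r_event > r_none)
    return correct / len(pairs)

# hypothetical model outputs for five patient pairs
pairs = [(0.8, 0.3), (0.6, 0.7), (0.9, 0.2), (0.5, 0.4), (0.4, 0.6)]
print(pairwise_accuracy(pairs))  # 3 of 5 pairs ranked correctly -> 0.6
```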
Why It’s Hot:
This type of application of AI can help doctors quickly know what to look into, and shows how AI could help them spend less time diagnosing, and more time treating. It’s a long way from being completely flawless right now, but in the future, we might see an AI-powered robot instead of a nurse before we see the doctor.
Google is rolling out a few new features to its Google Flights search engine to help travelers tackle some of the more frustrating aspects of air travel – delays and the complexities of the cheaper, Basic Economy fares. Google Flights will take advantage of its understanding of historical data and its machine learning algorithms to predict delays that haven’t yet been flagged by airlines themselves.
As Google explains, the combination of data and A.I. technologies means it can predict some delays in advance of any sort of official confirmation. Google says that it won’t actually flag these in the app until it’s at least 80 percent confident in the prediction, though.
It will also provide reasons for the delays, like weather or an aircraft arriving late.
You can track the status of your flight by searching for your flight number or the airline and flight route, notes Google. The delay information will then appear in the search results.
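The 80 percent rule is a simple confidence gate. A sketch of that decision logic (the threshold comes from Google’s announcement; the function and probabilities are invented for illustration):

```python
CONFIDENCE_THRESHOLD = 0.80  # per Google: don't flag below 80% confidence

def should_flag_delay(p_delay, airline_confirmed):
    """Show airline-confirmed delays always; predictions only when confident."""
    return airline_confirmed or p_delay >= CONFIDENCE_THRESHOLD

print(should_flag_delay(0.65, airline_confirmed=False))  # False: too uncertain
print(should_flag_delay(0.91, airline_confirmed=False))  # True: confident prediction
print(should_flag_delay(0.10, airline_confirmed=True))   # True: official status wins
```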
The other new feature aims to help travelers make sense of what Basic Economy fares include and exclude with their ticket price. Google Flights will now display the restrictions associated with these fares – like limits on using overhead space or selecting a seat, as well as additional baggage fees. It’s initially doing so for American, Delta and United flights worldwide.
Great example of using AI and predictive methods to drive a better customer experience, and to bring transparency to an industry that usually offers little of it. It makes Google’s search solutions more desirable and solidifies Google as THE place to search for everything. It would be good to see the alerts become actionable, though; right now they are more anxiety-creators.
The next generation of powerful telescopes will scan millions of stars and generate massive amounts of data that astronomers will be tasked with analyzing. That’s way too much data for people to sift through and model themselves — so astronomers are turning to AI to help them do it.
How they’re using it:
1) Coordinate telescopes. The large telescopes that will survey the sky will be looking for transient events – new signals or sources that “go bump in the night,” says Los Alamos National Laboratory’s Tom Vestrand.
2) Analyze data. Every 30 minutes for two years, NASA’s new Transiting Exoplanet Survey Satellite will send back full-frame photos of almost half the sky, giving astronomers some 20 million stars to analyze. Over 10 years there will be 50 million gigabytes of raw data collected.
3) Mine data. “Most astronomy data is thrown away but some can hold deep physical information that we don’t know how to extract,” says Joshua Peek from the Space Telescope Science Institute.
Why it’s hot:
Algorithms have helped astronomers for a while, but recent advances in AI – especially image recognition and faster, more inexpensive computing power – mean the techniques can be used by more researchers. The new AI will automate the process and be able to identify things that humans may not even know exist or have begun to understand.
“How do you write software to discover things that you don’t know how to describe? There are normal unusual events, but what about the ones we don’t even know about? How do you handle those? That will be where real discoveries happen, because by definition you don’t know what they are.” – Tom Vestrand, Los Alamos National Laboratory
The wave of magical CES 2018 innovations has begun to roll in, and among those already announced is Nuance Communications’ “Dragon Drive” – an (extremely) artificially intelligent assistant for your car.
“By combining conversational artificial intelligence with a number of nonverbal cues, Dragon Drive helps you talk to your car as though you were talking to a person. For example, the AI platform now boasts gaze detection, which allows drivers to get information about and interact with objects and places outside of the car simply by looking at them and asking Dragon Drive for details. If you drive past a restaurant, you can simply focus your gaze at said establishment and say, “Call that restaurant,” or “How is that restaurant rated?” Dragon Drive provides a “meaningful, human-like response.”
Moreover, the platform enables better communication with a whole host of virtual assistants, including smart home devices and other popular AI platforms. In this way, Dragon Drive claims, drivers will be able to manage a host of tasks all from their cars, whether it’s setting their home heating system or transferring money between bank accounts.
Dragon Drive’s AI integration does not only apply to external factors, but to components within the car as well. For instance, if you ask the AI platform to find parking, Dragon Drive will take into consideration whether or not your windshield wipers are on to determine whether it ought to direct you to a covered parking area to avoid the rain. And if you tell Dragon Drive you’re cold, the system will automatically adjust the car’s climate (but only in your area, keeping other passengers comfortable).
Why It’s Hot:
Putting aside the question of how many AI assistants we might have in our connected future, what was really interesting to see was the integration of voice and eye-tracking biometrics. Using your voice as your key (and to personalize settings for you and your passengers), the car reminding you of memories from locations you’re passing, identifying stores, buildings, restaurants and more along your route with just a gaze – it’s amazing to think what the future holds as all the technologies we’ve only just seen emerging in recent years converge.
New York startup Finery has created an AI-powered operating system that will organize your wardrobe.
It provides an automated system that reminds women what options they have, as well as creating outfits for them – saving users a lot of time and money (as they won’t mistakenly buy another grey cashmere jumper if they know they already have three at home).
Users link The Wardrobe Operating System to their email address, so the platform can browse through their mailbox to find their shopping history. All the items they’ve purchased online are then transferred to their digital wardrobe (with 93% accuracy).
Any clothing bought from a bricks-and-mortar shop can be added as well, but that’s done manually by either searching the Finery database for the item or uploading an image (either one you’ve taken or one from the internet). Finery uses Cloud Vision to identify what the object is (skirt, dress, trousers, etc.), the color and the material – then the brand and size can be added manually.
Once your clothing is all uploaded, the platform uses algorithms to recommend outfits based on the pieces you own as well as recommending future purchases that would match with your current items.
Users can also create and save outfits within the platform. And, if they give Finery access to their shopping accounts, the startup will aggregate all their unpurchased shopping cart items into a single Wishlist and alert them when said items go on sale.
Finery will alert its users when the return window for an item they’ve purchased is closing. And it will also let them know if they already own an item that looks similar to one they are planning on buying.
Finery has currently partnered with over 500 stores, equivalent to more than 10,000 brands, to create its online catalog – coverage of about ninety percent of the retail market, according to the company.
Next, the company will be expanding into children’s clothing, and then men’s fashion. And it’s working on developing algorithms to suggest outfit combinations based on weather, location and personal preference, as well as a personalized recommendations tool for items not yet in users’ closets.
Why It’s Hot:
This personal “stylist” gives fashion-challenged shoppers (like myself) the courage to shop online with confidence.
It helps avoid unnecessary fashion splurges – a BFD considering the average woman spends $250–350K on clothes over her lifetime.
It acts as a fashion dream-catcher that helps grant your wish list by making purchases easy.
One of the major milestones in the relatively short history of AI is when Google’s AlphaGo beat the best human Go player in the world in three straight games early last year. In order to prepare AlphaGo for its match, Google trained it using games played by other Go players, so it could observe and learn which moves win and which don’t. It learned from essentially watching others.
This week, Google announced AlphaGo Zero, AI that completely taught itself to win at Go. All Google gave it was the rules, and by experimenting with moves on its own, it learned how to play, and beat its predecessor AlphaGo 100 games to zero after just over a month of training.
Why It’s Hot:
AI is becoming truly generative with what DeepMind calls “tabula rasa learning”. While a lot of AI we still see on a daily basis is extremely primitive in comparison, the future of AI is a machine’s ability to create things with basic information and a question. And ultimately, learning on its own can lead to better results. As researchers put it, “Even when reliable data sets are available, they may impose a ceiling on the performance of systems trained in this manner…By contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.”
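As a toy illustration of tabula rasa learning (vastly simpler than AlphaGo Zero, and invented purely for this post), the agent below is given only the rules of a tiny Nim game – take 1 or 2 stones, whoever takes the last stone wins – and learns good moves entirely from self-play, starting with an empty value table:

```python
import random

random.seed(0)
Q = {}  # (stones_remaining, action) -> learned value; starts empty (tabula rasa)

def choose(stones, eps=0.1):
    """Epsilon-greedy move selection over the legal actions."""
    actions = [a for a in (1, 2) if a <= stones]
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((stones, a), 0.0))

def train(episodes=5000, alpha=0.5):
    for _ in range(episodes):
        stones, history, player = 10, [], 0
        while stones > 0:  # play one game of self-play
            a = choose(stones)
            history.append((player, stones, a))
            stones -= a
            player = 1 - player
        winner = history[-1][0]  # whoever took the last stone
        for p, s, a in history:  # crude Monte Carlo value update
            reward = 1.0 if p == winner else -1.0
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (reward - old)

train()
print(choose(2, eps=0))  # with 2 stones left, the learned move is to take both and win
```

No example games are ever shown to the agent – exactly the point the DeepMind researchers make about reinforcement learning systems trained from their own experience.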
Robots are making their way into schools and education to help children lower their stress and boost their creativity. Among those who have diseases such as diabetes and autism, robots can even help restore their self-confidence.
One study shows that children with autism engage better with robots than with humans, because robots are simple and predictable.
Another study, working with children who have diabetes, made its robots “imperfect” and had them make mistakes so they wouldn’t intimidate the children. The children learn that they don’t have to be perfect all the time.
Why it’s hot (or not): are robots the right companions for children? What impact would it have on human interactions if children are exposed to AI at such a young age?
Slack CEO Stewart Butterfield recently spoke to MIT Technology Review about the ways the company plans to use AI to keep people from feeling overwhelmed with data. Some interesting tidbits from the interview…
When asked about goals for Slack’s AI research group, Butterfield pointed to search. “You could imagine an always-on virtual chief of staff who reads every single message in Slack and then synthesizes all that information based on your preferences, which it has learned about over time. And with implicit and explicit feedback from you, it would recommend a small number of things that seem most important at the time.”
When asked what else the AI group was researching, Butterfield answered Organizational Insights. “I would—and I think everyone would—like to have a private version of a report that looks at things like: Do you talk to men differently than you talk to women? Do you talk to superiors differently than you talk to subordinates? Do you use different types of language in public vs. private? In what conversations are you more aggressive, and in what conversations are you more kind? If it turns out you tend to be accommodating, kind, and energetic in the mornings, and short-tempered and impatient in the afternoon, then maybe you need to have a midafternoon snack.”
Why It’s Hot
The idea of analyzing organizational conversation to learn about and solve collaboration and productivity issues is incredibly intriguing – and as always with these things, something to keep an eye on to ensure the power is used for good.
Deere & Company has signed an agreement to acquire Blue River Technology, a leader in applying machine learning to agriculture.
Blue River has designed and integrated computer vision and machine learning technology that will enable growers to reduce the use of herbicides by spraying only where weeds are present, optimizing the use of inputs in farming – a key objective of precision agriculture.
“Blue River is advancing precision agriculture by moving farm management decisions from the field level to the plant level,” said Jorge Heraud, co-founder and CEO of Blue River Technology. “We are using computer vision, robotics, and machine learning to help smart machines detect, identify, and make management decisions about every single plant in the field.”
AI is continuing to rule the press headlines across all industries. No matter who you are or what you do, your life will somehow be affected by artificial intelligence. Below are just a few charts recently published by the Electronic Frontier Foundation on how quickly AI is catching up with humans.
Why It’s Hot:
Artificial intelligence will continue to get better over time. So much so that researchers at Oxford and Yale predict AI will outperform humans in many activities in the coming decades, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years, and of automating all human jobs in 120 years.
At the same time, Google announced it is launching an auto-reply system that scans emails and generates possible responses to choose from.
The new functionality, added to the app-store versions of Gmail, works by analyzing a large, anonymized body of email to generate possible responses. Machine-learning systems then rank these to pick the “best responses to the email at hand”. Google is keen to emphasise that its system knows its limits: not everything merits an automated response, and only about one-third of emails are covered.
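Conceptually, the pipeline is: generate candidate replies, rank them, and skip emails that don’t merit a suggestion at all. A drastically simplified sketch (the canned replies and overlap-based scoring are invented stand-ins for Gmail’s actual models):

```python
CANDIDATES = ["Sounds good!", "Thanks!", "I'll take a look.", "Love it!"]

def words(text):
    """Lowercase, punctuation-stripped word set."""
    return {w.strip(".,!?") for w in text.lower().split()}

def score(candidate, email):
    # toy relevance signal: word overlap between reply and email
    return len(words(candidate) & words(email))

def suggest_replies(email, top_k=3):
    ranked = sorted(CANDIDATES, key=lambda c: score(c, email), reverse=True)
    if score(ranked[0], email) == 0:
        return []  # not every email merits an automated response
    return ranked[:top_k]

print(suggest_replies("This sounds good, thanks for the update!"))
print(suggest_replies("Quarterly compliance attestation needed"))  # [] -> no suggestion
```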
Most email is unnecessary and most email responses are perfunctory acknowledgements – verbal read-receipts. In the war for control of your inbox, Gmail may have given us an important missile defence shield. Nice! Thanks! Love it!
It’s no surprise that climate change is inciting detrimental effects on our planet, but one of the most troubling is its effect on agriculture. The MIT Media Lab is hoping to remedy this by using special “food computers” to create the perfect climates for growing food, no matter the location or time of year. That means that not only could countries farm their local crops all year round, but they could also grow crops that are not native to their region of the world, meaning they could have fresh produce on-demand. Say goodbye to having to wait for shipments!
The Open Agriculture Initiative Personal Food Computer was first created in 2015, and can study and replicate the best growing conditions for specific plants with the use of sensors, actuators and machine vision. The Personal Food Computer can alter the light, nutrients and salinity of water. As the computer watches a plant, like basil, grow, it picks up data that can be used on the next set of crops. The research team is also trying to make the food itself tastier by maximizing the number of volatile molecules inside the crop, which is made possible by leaving the computer on constantly.
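At its core, a food computer runs a sense-compare-actuate loop against a target “climate recipe”. The recipe values and tolerance below are invented for illustration, not the Open Agriculture Initiative’s real parameters:

```python
RECIPE = {"temp_c": 24.0, "humidity_pct": 65.0, "water_ec": 1.8}  # hypothetical targets

def control_step(sensors, tolerance=0.05):
    """Compare sensor readings to the recipe and return actuator commands."""
    commands = {}
    for key, target in RECIPE.items():
        reading = sensors[key]
        if reading < target * (1 - tolerance):
            commands[key] = "increase"   # e.g. run the heater, humidifier, or doser
        elif reading > target * (1 + tolerance):
            commands[key] = "decrease"   # e.g. vent, dehumidify, or dilute
        else:
            commands[key] = "hold"
    return commands

print(control_step({"temp_c": 21.0, "humidity_pct": 66.0, "water_ec": 2.1}))
```

The machine-vision and data-collection side then tunes the recipe itself between crops, which is where the “learning” described above comes in.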
Babak Hodjat, CEO of Sentient says it’s all about engineering food in a totally different way: “Ultimately, this is non-GMO GMO. You’re not messing with the plant’s DNA. You’re just allowing it to exhibit the behavior it would in nature should that kind of environment exist.”
Rolling with the punches, so to speak. In the case of environmental change, we can adapt. Looking at something like this at scale — could be an innovation that shifts how we approach agriculture and could also inspire additional environmental innovation.
352 AI experts forecast a 50% chance that AI will outperform humans in all tasks within 45 years, and take over all human jobs within 120 years. Many of the world’s leading experts on machine learning were among those contacted, including Yann LeCun, director of AI research at Facebook, Mustafa Suleyman from Google’s DeepMind, and Zoubin Ghahramani, director of Uber’s AI labs.
Get the full research document here; page 14 has details on the predictions.
Two days ago, Publicis Groupe CEO and chairman Arthur Sadoun announced that his network would be foregoing all awards, trade shows and other paid promotional efforts for more than a year while developing Marcel, an AI powered “professional assistant” that it plans to launch next June at the 2018 VivaTech conference in Paris.
This well-timed announcement, right in the middle of Cannes, has generated a huge flurry of press, with speculation about the real driver of the announcement ranging from “It’s just a publicity stunt” to “Make no mistake, this is purely about saving money in 2018 as growth has slowed to a crawl” to “It’s smart. Award shows are a misguided way to stroke a few people’s egos. On top of that, there’s a ton of work being done for the sole purpose of winning awards. And the number of shows is ridiculous too.”
Regardless of intent, sticking to plan will be difficult. Creatives across Publicis are reportedly up in arms. Surely, the lack of opportunity to stock a trophy case will make it more difficult for Publicis to attract some top creative talent. And, of course, clients like awards too. Poaching is a real concern.
Having gotten sucked into the drama, we’ve read it all, and this response, from R/GA is a favorite. Quick wit has earned R/GA a share of Publicis’ spotlight!
Why it’s Hot: Publicity stunts. All the cool agencies are doing it.
Amazon wants its Prime subscribers ordering from its online store all the time, so it just cooked up a new device to help them do exactly that – and it’s essentially giving it away for free.
The company just launched a new instant-ordering gadget, the Dash Wand, that lets you fill up your Amazon shopping cart by using voice commands or scanning barcodes on the packages you have sitting in your kitchen cupboards.
The Dash Wand is essentially an updated version of the OG Amazon Dash wand that debuted in 2015, but this newer version crucially adds Amazon’s artificially intelligent assistant, Alexa, to help out. The digital assistant can sync your shopping list across Amazon devices, convert units of measurement, and search for recipes.
This is a huge upgrade for Amazon’s instant-ordering devices. The original Dash was significantly bigger, cost more than twice as much as this new one, and only worked with AmazonFresh orders.
Amazon’s really pushing the Wand, offering a similar deal to previous promotions for its instant ordering Dash buttons. If you buy a Dash Wand for $20, you’ll qualify immediately for $20 credit for your next purchase after registering the device. It literally pays for itself — and you can opt-in for a free 90-day AmazonFresh trial, which typically costs $15 per month. It’s actually a pretty great deal for anyone with a Prime subscription.
The Wand is also magnetic, so it can live on your fridge close to all of your most frequently ordered foods, and its Alexa access makes it more useful than the Dash buttons, which are restricted to one item instant ordering.
You don’t get the full Alexa experience here, though. The Wand can’t play music, and its press-button functionality means it won’t automatically respond to the genial “Hey, Alexa” wake command.
It might sound ridiculous that the company is essentially giving the Wands away with all the discounts and incentives, but it’s a savvy business move. Making the shopping experience easier and offering a new Alexa toy to play with will only drive up orders, as if Amazon needs any help to keep its business afloat.
Connected AI experiences make the virtual assistant craze more useful. Amazon is pushing forward on many different ways to connect Alexa with other platforms, and this is a great example of a type of utility that in a few years we will wonder how we lived without.
The Mirai is Toyota’s car of the future. It runs on hydrogen fuel cells, gets 312 miles on a full tank and only emits water vapor. So, to target tech and science enthusiasts, the brand is running thousands of ads with messaging crafted based on their interests.
The catch? The campaign was written by IBM’s supercomputer, Watson. After spending two to three months training the AI to piece together coherent sentences and phrases, Saatchi LA began rolling out a campaign last week on Facebook called “Thousands of Ways to Say Yes” that pitches the car through short video clips.
Saatchi LA wrote 50 scripts based on location, behavioral insights and occupation data that explained the car’s features to set up a structure for the campaign. The scripts were then used to train Watson so it could whip up thousands of pieces of copy that sounded like they were written by humans.
Some genius developer has boldly chosen to experiment with perhaps the world’s most forgotten voice assistant, Microsoft Cortana, and imagined what interacting with her could be like if you added another dimension to it.
In his words – “It’s basically what I imagined Microsoft’s version of Alexa or Google Home would be like if they were to use the holographic AI sidekick from the Halo franchise.”
As seen in the video above, in his prototype, it’s as if you’re speaking to an actual artificial person, making the experience feel more human.
Why it’s hot:
Amazon recently released the Echo Show, which allows skillmakers to add a “face” to their interactions, but this makes that look like a kid’s toy. It shows how what started not long ago as primitive voice technology on a phone could quickly turn into actual virtual assistants that look and act like humans, powered by the same underlying technology. Plus, the 145 million people who already have access to Cortana may finally stop ignoring her.
Photo from MIT Technology Review – Professional StarCraft player Byun Hyun Woo playing in the 2016 StarCraft II World Championship Series, which he won.
Scientists continue to train AI to compete professionally in classic strategy games like chess and Go as a sort of basic Turing test. Now that AIs have shown they can out-maneuver humans in those games, some consider StarCraft – a strategic multiplayer game in which players compete to dominate the map as one of several factions – to be AI’s next challenge.
“When you play StarCraft, you have to respond very quickly to lots of uncertainties and variables, but I’ve noticed that AI like AlphaGo isn’t that good at reacting to unexpected scenarios,” Byun says.
A StarCraft victory for an AI trained via reinforcement learning would be proof that its intelligence is capable of executing both long- and short-term decisions on the fly – and would bring AI one step closer to human-like decision making.
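To make “reinforcement learning” concrete at toy scale, here is a standard tabular Q-learning sketch — not DeepMind’s StarCraft setup, just the textbook algorithm — on a tiny 1-D world. It shows the same tension Byun describes: the agent must learn to value actions for their delayed payoff, not their immediate one.

```python
import random

# Tabular Q-learning on a 6-state corridor. The only reward sits at the
# far right, so the agent must learn that stepping right is worth it
# even though most steps earn nothing immediately.
N = 6                     # states 0..5; reward waits at state 5
ACTIONS = (-1, +1)        # step left or step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

random.seed(0)
for _ in range(500):                         # training episodes
    s = 0
    while s != N - 1:
        if random.random() < eps:            # explore occasionally
            a = random.choice(ACTIONS)
        else:                                # otherwise act greedily
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N - 1)       # move, clamped to the world
        r = 1.0 if s2 == N - 1 else 0.0      # reward is delayed until the goal
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy heads right, toward the delayed reward,
# from every state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N - 1)}
print(policy)
```

StarCraft raises the stakes on every axis — a huge action space, hidden information, real-time pressure — but the underlying objective is this same discounted long-term reward.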
One of the brilliant minds at Google has developed an algorithm that can create video from a single image – and has. The AI does this by predicting each next frame based on the previous one, and in this instance did so 100,000 times to produce the 56-minute video you see above. Per its creator:
“I used videos recorded from train windows, with landscapes that move from right to left, and trained a Machine Learning (ML) algorithm with it. What you see at the beginning is what the algorithm produced after very little learning. It learns more and more during the video, which is why there are more and more realistic details. Learning is updated every 20s. The results are low resolution, blurry, and not realistic most of the time. But it resonates with the feeling I have when I travel in a train. It means that the algorithm learned the patterns needed to create this feeling. Unlike classical computer-generated content, these patterns are not chosen or written by a software engineer.”
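As a minimal illustration of the autoregressive idea — predict the next frame from the previous one, then feed the prediction back in — here is a toy stand-in, not the creator’s actual model: the “landscape” is a 1-D strip of pixels and the learned next-frame rule is simply a horizontal shift.

```python
# Toy next-frame prediction: fit a rule from one pair of consecutive
# frames, then roll it forward autoregressively, each output becoming
# the next input (as in the 100,000-frame train video).
def estimate_shift(frame_a, frame_b):
    """Find the circular shift that best maps frame_a onto frame_b."""
    width = len(frame_a)
    def error(shift):
        return sum((frame_a[(i + shift) % width] - frame_b[i]) ** 2
                   for i in range(width))
    return min(range(width), key=error)

def rollout(frame, shift, steps):
    """Autoregressively predict `steps` new frames, each from the last."""
    frames = [frame]
    for _ in range(steps):
        prev = frames[-1]
        frames.append([prev[(i + shift) % len(prev)]
                       for i in range(len(prev))])
    return frames

train_a = [0, 0, 5, 9, 5, 0, 0, 0]   # a "mountain" drifting left...
train_b = [0, 5, 9, 5, 0, 0, 0, 0]   # ...one pixel per frame
shift = estimate_shift(train_a, train_b)
video = rollout(train_b, shift, steps=100)
print(shift, len(video))  # 1 101
```

The real project replaces the hand-rolled shift rule with a learned model, which is why its rollouts drift into blurry, dreamlike imagery instead of repeating exactly.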
Why it’s hot:
Creativity and imagination have been among the most inimitable human qualities since forever. And anyone who’s ever created anything remotely artistic will tell you inspiration isn’t as easy as hitting ‘go’. While this demonstration looks more like an art-school video project than a timeless social commentary revered in a museum, it made me wonder: what if bots created art? Would artists compete with them? Would they give up their pursuit because bots can create at the touch of a button? Would this spawn a whole new area of human creativity, born of the emotion of having your work held up next to programmatic art? Could artificial intelligence ever create something that holds up against real human creativity?
You might assume that technology and AI are neutral forces in this world. The truth is, our technology is biased and created in the image of its creators – as Melinda Gates and Fei-Fei Li argue in this interview, these are “guys with hoodies.”
Have you ever?
Tried on an Oculus Rift, only to find that the hardware doesn’t fit your facial profile?
Had face tracking software totally fail because it wasn’t programmed to register your traits (standard human features such as eyes, a nose, a mouth)?
Had voice assistants / voice recognition not understand you due to your accent or dialect? Perhaps the voice assistant straight up doesn’t speak your native language.
Consider: Her and Ex Machina, two recent and popular representations of AI in cinema, both of which represent AI, and its characters’ interactions with AI, from the point of view of male psychology and desire.
As Gates points out:
“If we don’t get women and people of color at the table — real technologists doing the real work — we will bias systems. Trying to reverse that a decade or two from now will be so much more difficult, if not close to impossible.”
Together, Gates and Li are launching a national non-profit called AI4ALL, aimed at increasing the diversity of voices behind AI, and getting people of color and women educated in a field where they are highly underrepresented.
Why it’s hot:
AI has the potential to redefine our future. Where is the diversity of minds necessary to make it a future for ALL?
It’s a simple idea: turn a selfie into cartoon character stickers (emojis) of yourself. Google’s new Allo app, touted by the company as a “smart messaging app” that lets you “express yourself better” includes this new AI feature that uses your smartphone camera and facial recognition technology to generate detailed facial expressions to suit every emotion. According to Fast Co, Google thinks there are 563 quadrillion faces that the tool could generate.
“Illustrations let you bring emotional states in a way that selfies can’t.” Selfies are, by definition, idealizations of yourself. Emoji, by contrast, are distillations and exaggerations of how you feel. To that end, the emoji themselves are often hilarious: You can pick one of yourself as a slice of pizza, or a drooling zombie. “The goal isn’t accuracy,” explains Cornwell. “It’s to let someone create something that feels like themselves, to themselves.”
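Google hasn’t published the breakdown behind that 563-quadrillion figure, but the arithmetic behind such claims is plain combinatorics: independent feature choices multiply. The categories and option counts below are invented for illustration; the real tool evidently has enough of both to push the product into the quadrillions.

```python
# Hypothetical feature categories and per-category option counts --
# not Google's actual data, just an illustration of how independent
# choices multiply into enormous totals.
FEATURES = {
    "face shape": 12, "skin tone": 10, "hair style": 30,
    "hair color": 15, "eyes": 20, "eyebrows": 12,
    "nose": 14, "mouth": 16, "glasses": 8, "accessories": 25,
}

total = 1
for options in FEATURES.values():
    total *= options            # independent choices multiply

print(f"{total:,} combinations from just ten categories")
```

Even these modest made-up counts yield over half a trillion distinct faces, so a few more categories or richer options per category quickly reach quadrillions.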
Why it’s hot:
It’s another layer of personalization in social media and messaging apps that Snapchat and Instagram will look to integrate. It could also mean the end of Bitmoji as we know it.
On a deeper level, there could be many applications outside of entertainment for this type of technology. If you can use AI to better express how you feel to a doctor or nurse, for example, a whole new world of communication could be opened up.
And going broader, there’s a big question: as messaging apps get smarter and smarter, do our interactions through them become more or less valuable? When AI is the go-between, are we better expressing ourselves, or is it a substitute for real interaction?