Inside Amazon’s plan for Alexa to run your entire life

The creator of the famous voice assistant dreams of a world where Alexa is everywhere, anticipating your every need.

Speaking with MIT Technology Review, Rohit Prasad, Alexa’s head scientist, revealed further details about where Alexa is headed next. The crux of the plan is for the voice assistant to move from passive to proactive interactions. Rather than wait for and respond to requests, Alexa will anticipate what the user might want. The idea is to turn Alexa into an omnipresent companion that actively shapes and orchestrates your life. This will require Alexa to get to know you better than ever before.

In June at the re:Mars conference, he demoed [view from 53:54] a feature called Alexa Conversations, showing how it might be used to help you plan a night out. Instead of manually initiating a new request for every part of the evening, you would need only to begin the conversation—for example, by asking to book movie tickets. Alexa would then follow up to ask whether you also wanted to make a restaurant reservation or call an Uber.

A more intelligent Alexa

Here’s how Alexa’s software updates will come together to execute the night-out planning scenario. In order to follow up on a movie ticket request with prompts for dinner and an Uber, a neural network learns—through billions of user interactions a week—to recognize which skills are commonly used with one another. This is how intelligent prediction comes into play. When enough users book a dinner after a movie, Alexa will package the skills together and recommend them in conjunction.
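
Amazon hasn't published how this works under the hood, but the core of skill co-occurrence learning can be sketched in a few lines. Everything below (the skill names, session logs, and threshold) is hypothetical:

```python
from collections import Counter
from itertools import combinations

# Hypothetical session logs: which skills each user invoked in one evening.
sessions = [
    {"book_movie_tickets", "reserve_restaurant", "request_uber"},
    {"book_movie_tickets", "reserve_restaurant"},
    {"play_music"},
    {"book_movie_tickets", "request_uber"},
]

# Count how often pairs of skills appear in the same session.
pair_counts = Counter()
for skills in sessions:
    for pair in combinations(sorted(skills), 2):
        pair_counts[pair] += 1

def follow_up_suggestions(skill, min_count=2):
    """Suggest skills that co-occur with the given skill often enough."""
    return [
        other
        for (a, b), count in pair_counts.items()
        for other in ((b,) if a == skill else (a,) if b == skill else ())
        if count >= min_count
    ]

print(follow_up_suggestions("book_movie_tickets"))
# e.g. ['request_uber', 'reserve_restaurant']
```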

But reasoning is required to know what time to book the Uber. Taking into account your and the theater’s location, the start time of your movie, and the expected traffic, Alexa figures out when the car should pick you up to get you there on time.
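
That reasoning step is, at its core, simple arithmetic over a few signals. A minimal sketch, assuming the traffic-adjusted travel time comes from some maps service (stubbed here as a plain number):

```python
from datetime import datetime, timedelta

def pickup_time(showtime: datetime, travel_minutes: float, buffer_minutes: float = 15) -> datetime:
    """Work backwards from the movie's start time, the traffic-adjusted travel
    time, and a buffer for parking and finding your seat."""
    return showtime - timedelta(minutes=travel_minutes + buffer_minutes)

showtime = datetime(2019, 11, 22, 19, 30)
print(pickup_time(showtime, travel_minutes=28))  # 2019-11-22 18:47:00
```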

Prasad imagines many other scenarios that might require more complex reasoning. You could imagine a skill, for example, that would allow you to ask your Echo Buds where the tomatoes are while you’re standing in Whole Foods. The Buds will need to register that you’re in the Whole Foods, access a map of its floor plan, and then tell you the tomatoes are in aisle seven.

In another scenario, you might ask Alexa through your communal home Echo to send you a notification if your flight is delayed. When it’s time to do so, perhaps you are already driving. Alexa needs to realize (by identifying your voice in your initial request) that you, not a roommate or family member, need the notification—and, based on the last Echo-enabled device you interacted with, that you are now in your car. Therefore, the notification should go to your car rather than your home.

This level of prediction and reasoning will also need to account for video data as more and more Alexa-compatible products include cameras. Let’s say you’re not home, Prasad muses, and a Girl Scout knocks on your door selling cookies. The Alexa on your Amazon Ring, a camera-equipped doorbell, should register (through video and audio input) who is at your door and why, know that you are not home, send you a note on a nearby Alexa device asking how many cookies you want, and order them on your behalf.

To make this possible, Prasad’s team is now testing a new software architecture for processing user commands. It involves filtering audio and visual information through many more layers. First Alexa needs to register which skill the user is trying to access among the roughly 100,000 available. Next it will have to understand the command in the context of who the user is, what device that person is using, and where. Finally it will need to refine the response on the basis of the user’s previously expressed preferences.
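
Prasad didn't share implementation details, but the flow he describes can be read as a pipeline of filters. A heavily simplified, hypothetical sketch, in which keyword matching and a two-user lookup table stand in for the real skill index and identity models:

```python
# Hypothetical, heavily simplified version of the layered flow described above.
SKILLS = {
    "flight status": lambda user, device: f"Routing flight-delay alerts to {device} for {user}.",
    "movie tickets": lambda user, device: f"Booking tickets for {user}; follow-ups sent to {device}.",
}

USERS = {"alice-voiceprint": "Alice", "bob-voiceprint": "Bob"}
LAST_DEVICE = {"Alice": "car", "Bob": "kitchen Echo"}

def handle_utterance(text, voiceprint):
    # 1. Pick the skill the request is aimed at (naive keyword matching stands in
    #    for matching against ~100,000 skills).
    skill = next((fn for phrase, fn in SKILLS.items() if phrase in text),
                 lambda user, device: "Sorry, I don't know that one yet.")
    # 2. Understand the command in context: who is asking, and on which device.
    user = USERS[voiceprint]
    device = LAST_DEVICE[user]
    # 3. Produce a response refined for that user and that device.
    return skill(user, device)

print(handle_utterance("tell me if my flight status changes", "alice-voiceprint"))
```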

Why It’s Hot:  “This is what I believe the next few years will be about: reasoning and making it more personal, with more context,” says Prasad. “It’s like bringing everything together to make these massive decisions.”

Google Claims a Quantum Breakthrough That Could Change Computing

Google said on Wednesday that it had achieved a long-sought breakthrough called “quantum supremacy,” which could allow new kinds of computers to do calculations at speeds that are inconceivable with today’s technology.

The Silicon Valley giant’s research lab in Santa Barbara, Calif., reached a milestone that scientists had been working toward since the 1980s: Its quantum computer performed a task that isn’t possible with traditional computers, according to a paper published in the science journal Nature.

A quantum machine could one day drive big advances in areas like artificial intelligence and make even the most powerful supercomputers look like toys. The Google device did in 3 minutes 20 seconds a mathematical calculation that supercomputers could not complete in under 10,000 years, the company said in its paper.

Scientists likened Google’s announcement to the Wright brothers’ first plane flight in 1903 — proof that something is really possible even though it may be years before it can fulfill its potential.

Still, some researchers cautioned against getting too excited about Google’s achievement since so much more work needs to be done before quantum computers can migrate out of the research lab. Right now, a single quantum machine costs millions of dollars to build.

Many of the tech industry’s biggest names, including Microsoft, Intel and IBM as well as Google, are jockeying for a position in quantum computing. And venture capitalists have invested more than $450 million into start-ups exploring the technology, according to a recent study.

China is spending $400 million on a national quantum lab and has filed almost twice as many quantum patents as the United States in recent years. The Trump administration followed suit this year with its own National Quantum Initiative, promising to spend $1.2 billion on quantum research, including computers.

A quantum machine, the result of more than a century’s worth of research into a type of physics called quantum mechanics, operates in a completely different manner from regular computers. It relies on the mind-bending ways some objects act at the subatomic level or when exposed to extreme cold, like the metal chilled to nearly 460 degrees below zero inside Google’s machine.

“We have built a new kind of computer based on some of the unusual capabilities of quantum mechanics,” said John Martinis, who oversaw the team that managed the hardware for Google’s quantum supremacy experiment. Noting the computational power, he added, “We are now at the stage of trying to make use of that power.”

On Monday, IBM fired a pre-emptive shot with a blog post disputing Google’s claim that its quantum calculation could not be performed by a traditional computer. The calculation, IBM argued, could theoretically be run on a current computer in less than two and a half days — not 10,000 years.

“This is not about final and absolute dominance over classical computers,” said Dario Gil, who heads the IBM research lab in Yorktown Heights, N.Y., where the company is building its own quantum computers.

Other researchers dismissed the milestone because the calculation was notably esoteric. It generated random numbers using a quantum experiment that can’t necessarily be applied to other things.

As its paper was published, Google responded to IBM’s claims that its quantum calculation could be performed on a classical computer. “We’ve already peeled away from classical computers, onto a totally different trajectory,” a Google spokesman said in a statement. “We welcome proposals to advance simulation techniques, though it’s crucial to test them on an actual supercomputer, as we have.”

Source: NY Times

Why It’s Hot

It’s hard to even fathom what possibilities this opens, but it seems application is still a while away.

Immortalized in Film…? Not so fast.

Tencent Shows The Future Of Ads; Will Add Ads In Existing Movies, TV Shows

One of China’s largest online video platforms is setting out to use technology to integrate branded content into movies and TV shows from any place or era.

(Yes, a Starbucks on Tatooine…or Nike branded footwear for the first moonwalk.)

Why It’s Hot:  

  1. Potentially exponential expansion of available ad inventory
  2. Increased targetability by interest, plus top-spin of borrowed interest
  3. Additional revenue streams for content makers
  4. New questions of the sanctity of creative vision, narrative intent and historical truth

Advertising is an integral part of any business, and with increasing competition, it’s more important than ever to be visible. Mirriad, a computer-vision and AI-powered platform company, recently announced a partnership with Tencent that is about to change the advertising game. If you didn’t know, Tencent is one of the largest online video platforms in China. So how does it change the advertising game, you ask?

Mirriad’s technology enables advertisers to reach their target audience by integrating branded content (or ads) directly into movies and TV series. So, for instance, if an actor is holding just a regular cup of joe in a movie, this new API will enable Tencent to change that cup of coffee into a branded cup of coffee. Matthew Brennan, a speaker and writer who specialises in analysing Tencent and WeChat, shared a glimpse of how this tech works.

While we’re not sure if these ads will be clickable, they’ll still have a significant subconscious impact, if not a direct one. Marketers have long talked of mood marketing that builds a personal connection between the brand and the targeted user. So, with the ability to insert ads into crucial scenes and moments, advertisers will now be able to engage with their target users in a way that wasn’t possible before.

Mirriad currently has a two-year contract with Tencent under which it will trial the technology exclusively on Tencent’s video platform. If the trials are successful, meaning the inserted ads don’t make for a jarring viewing experience, we can expect this tech to go mainstream soon.

Phone a Friend: a mobile app for predicting teen suicide attempts

Rising suicide rates in the US are disproportionately affecting 10-24 year-olds, with suicide now the second leading cause of death in that group after unintentional injuries. It’s a complex and multifaceted topic, and one that leaves those whose lives are impacted wondering what they could have done differently to recognize the signs and intervene.

Researchers are hard at work figuring out whether a machine learning algorithm might be able to use data from an individual’s mobile device to assess risk and predict an imminent suicide attempt – before there may even be any outward signs. This work is part of the Mobile Assessment for the Prediction of Suicide (MAPS) study, involving 50 teenagers in New York and Pennsylvania. If successful, the effort could lead to a viable solution to an increasingly troubling societal problem.

Why It’s Hot

We’re just scratching the surface of the treasure trove of insights that might be buried in the mountains of data we’re all generating every day. Our ability to understand people more deeply, without relying on “new” sources of data, will have implications for the experiences brands and marketers deliver.

Forget a Thousand Words. Pictures Could Be Worth Big Bucks for Amazon Fashion – Adweek

Amazon is rolling out StyleSnap, its AI-enabled shopping feature that helps you shop from a photograph or snapshot. Consumers upload images to the Amazon app and it considers factors like brand, price and reviews to recommend similar items.

Amazon has been able to leverage data from brands sold on its site to develop products that are good enough or close enough to the originals, usually at lower price points, and thereby gain an edge, but it’s still only a destination for basics like T-shirts and socks. With StyleSnap, Amazon is hoping to further crack the online fashion retailing sector.
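
Amazon hasn’t said how StyleSnap ranks its matches, but one plausible shape is combining visual similarity with signals like price and review score. A toy sketch with made-up embeddings and catalog entries:

```python
import numpy as np

# Hypothetical catalog: visual embeddings (e.g. from an image model) plus metadata.
catalog = [
    {"name": "Brand A midi dress", "embedding": np.array([0.9, 0.1, 0.3]), "price": 45.0, "review_score": 4.6},
    {"name": "Brand B midi dress", "embedding": np.array([0.8, 0.2, 0.4]), "price": 120.0, "review_score": 4.1},
    {"name": "Brand C sneakers", "embedding": np.array([0.1, 0.9, 0.2]), "price": 60.0, "review_score": 4.8},
]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def recommend(query_embedding, budget=100.0, top_k=2):
    """Rank items by visual similarity, nudged by reviews, filtered by budget."""
    candidates = [item for item in catalog if item["price"] <= budget]
    scored = [
        (cosine(query_embedding, item["embedding"]) + 0.05 * item["review_score"], item["name"])
        for item in candidates
    ]
    return [name for score, name in sorted(scored, reverse=True)[:top_k]]

# Embedding of the uploaded photo (hypothetical values).
print(recommend(np.array([0.85, 0.15, 0.35])))
```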

Why It’s Hot

Snapping and sharing is already part of retail culture, and now Amazon is creating a simple and seamless way of adding shopping and purchasing to this ubiquitous habit. The combination of AI and user reviews in its algorithm could change the way we shop: recommendations aren’t only based on the look of an item, but also on how customers experience it.

 

Source: Forget a Thousand Words. Pictures Could Be Worth Big Bucks for Amazon Fashion – Adweek

Other sources: https://www.cnet.com/news/amazon-stylesnap-uses-ai-to-help-you-shop-for-clothes/

DeepMind? Pffft! More like “dumb as a bag of rocks.”

Google’s DeepMind AI project, self-described as “the world leader in artificial intelligence research,” was recently tested against the type of math test that 16-year-olds take in the UK. The result? It scored only 14 out of 40. Womp womp!

“The researchers tested several types of AI and found that algorithms struggle to translate a question as it appears on a test, full of words and symbols and functions, into the actual operations needed to solve it.” (Medium)


Why It’s Hot

There is no shortage of angst among humans worried about losing their jobs to AI. Instead of feeling a reprieve, humans should take this as a sign that AI might be best designed to complement human judgment, not to replace it.

Woebot – Highly Praised App for Mental Health

AI counseling is the wave of the future. Cognitive Behavioral Therapy administered by a smart chatbot, via an app relying on SMS, has become highly popular and well reviewed. Woebot isn’t just the face of a trend; it’s a notable player in technology transforming healthcare.

Why It’s Hot

It’s not new. It’s better. The first counseling software, ELIZA, appeared around 1966. Part of the difficulty was that it required human intervention. Ironically, in 2019, when many believe a lack of human contact to be part of the problem, that void actually addresses a barrier in therapy: the perceived lack of anonymity and privacy. Sure, therapist visits are confidential, but people naturally have difficulty opening up in person. Plus there’s the waiting room anxiety. With an app, studies have shown that people get to the heart of their problem quicker.

Why it Matters

There’s a ton of demand for “talk therapy” and other services, and human counselors can’t keep up. People wait weeks and months for appointments, and that’s in the U.S., where therapists are compensated well. In this on-demand age, that’s seen as unacceptable. Woebot and others address the market’s need for immediate care. Another issue is cost: therapy is expensive. Apps are an obvious solve here, with no co-pay.

Obligatory Statement

All the apps remind users they’re no substitute for human counselors but they are helpful in reflecting behavior patterns and emotional red flags back to their users. At the very least, it’ll help you make the most of your next therapy visit.

tour the dali museum, with your host…DALI!

When Salvador Dali once said, “If someday I may die…I hope the people…will say, ‘Dali has died, but not entirely,’” I’m not sure he knew how right he was. Using AI, his namesake museum in St. Petersburg, Florida has now “resurrected” Dali to welcome visitors and provide commentary on his works as you move throughout the institution.

According to the museum, they did it by “pulling content from millions of frames of interviews with the artist and overlaying it onto an actor’s face–a digital mask, of sorts, that allowed the actor to appear as Dali whatever expression he made.” The museum also “cast another actor from Barcelona to ensure that the voice matched the countenance.”

Why it’s hot:

There’s no better experience if you want to learn about an individual and his/her art than to hear about it directly from that person. Especially when they’re as dynamic and memorable as Salvador Dali. Unfortunately, most individuals famous enough to have their own museum likely aren’t on hand to do that in person. Having a virtual Dali guide you through his works seems a perfect way to experience his brilliance as both an artist, and a human being.

[Source]

burger king’s “ai” TV campaign…


Burger King revealed several new TV spots that it says were “created by artificial intelligence”.

Via AdAge – “The brand’s statement claims that BK “decided to use high-end computing resources and big data to train an artificial neural network with advanced pattern recognition capabilities by analyzing thousands of fast-food commercials and competitive reports from industry research.” Burger King goes so far as to say that more than 300 commercials were created and tested in focus groups and says the ads will be the first ones created by an A.I. to air on national TV.”

But in reality, Burger King says it’s actually work done by real creatives, mocking the excitement around technology like AI.

According to BK, “we need to avoid getting lost in the sea of technology innovation and buzzwords and forget what really matters. And that’s the idea,” Marcelo Pascoa, Burger King’s global head of brand marketing, tells Ad Age in an emailed statement complete with the word “idea” in all caps. “Artificial intelligence is not a substitute for a great creative idea coming from a real person.”

Why it’s hot:

Is Burger King right here?

The spots they have created feel like they could have been generated by even a primitive artificial intelligence. Japan’s “AI Creative Director” debuted more than a year ago, and its work was actually not far off from what you’d expect from a real creative. The point missed here is that AI is not meant to replace people, but to help them. In attempting to make a joke about the enthusiasm around the technology, Burger King might actually have shown us a glimpse of advertising’s future.

[Source]

stanford AI generates sound with zero training…

Computer scientists at Stanford say they have “developed the first system for automatically synthesizing sounds to accompany physics-based computer animations,” one that “simulates sound from first physical principles” and, most impressively, unlike other AI, where “no training data is required”.

Why it’s hot:

While most AI to date requires overt training in order to properly synthesize an output, this requires none. It’s not the first AI to require no human assistance, but the future that might have seemed years off for AI is rapidly advancing. If AI can construct sound from visuals based on physical principles, you have to wonder how hard it might be to construct physical objects based on sound.

[Source]

Soundtrack of Your Life

Royal Caribbean is partnering with Berklee College of Music to set your vacation photos to music using AI.

 Source: https://www.adweek.com/creativity/royal-caribbean-now-sets-your-vacation-photos-to-music-using-ai/amp/

This week, Royal Caribbean is launching an online tool that turns user images into mini-videos with original music assembled by AI and inspired by the images themselves.

A picture from a botanical garden, of red flowers and green leaves, generates two bars of smooth jazz. An elaborate piece of graffiti on a brick wall renders into a crunching hip-hop beat.

The machine-learning process entailed more than 600 hours in which Royal Caribbean and a team of musicians and technologists reviewed hundreds of music tracks along with 10,000 photos, matching each of the 2.5 million combinations to one of 11 moods.

The A.I. in SoundSeeker uses Google Cloud Vision to identify objects, facial expressions and colors in a user’s photo by referencing the roadmap developed by the leaders in music theory at Berklee.
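
The mood-matching step can be pictured as a simple scoring function over whatever labels and dominant colors the vision API returns. A toy sketch (the moods, labels, and weights below are invented, not SoundSeeker’s actual mapping):

```python
# Hypothetical sketch of the photo -> mood step. Assume a vision API (such as
# Google Cloud Vision) has already returned labels and dominant colors for the image.
MOOD_RULES = {
    "smooth jazz":  {"labels": {"flower", "garden", "leaf"},   "colors": {"red", "green"}},
    "hip-hop beat": {"labels": {"graffiti", "wall", "street"}, "colors": {"brick", "gray"}},
    "ambient":      {"labels": {"ocean", "sky", "beach"},      "colors": {"blue", "white"}},
}

def pick_mood(labels, colors):
    """Score each mood by how many detected labels and colors it matches."""
    def score(rule):
        return len(rule["labels"] & labels) + len(rule["colors"] & colors)
    return max(MOOD_RULES, key=lambda mood: score(MOOD_RULES[mood]))

print(pick_mood({"flower", "garden", "person"}, {"red", "green"}))  # smooth jazz
```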

Why it’s hot

The tourism industry is always at the forefront of individualization, going beyond personalization by making something this personal truly unique.

discover places you never knew you wanted to stay…


AccorHotels launched something it calls the Seeker Project, a program that uses your heartbeat and instinctual reactions to different scenery to show you places its algorithm thinks you may want to visit.

There’s a website version anyone can try, but the whole thing started when a number of influencers were invited to Toronto and “asked to wear a headband to monitor their alpha and gamma brain waves and wrist cuffs that measured their heart rate and skin response. The experience then determined whether that person was an introvert or extrovert, sought tranquility or adventure, or preferred modern to rustic environments.

The biometric data was then processed through a custom algorithm and produced into a psychographic illustration and the visitors received recommendations for dream destinations based on their personal data.”

It provided results looking something like the ones I got below:

“You are craving a chance to reconnect with the world in a warm destination. You have a preference for classic and traditional surroundings and need to recharge in a spa getaway. You feel most at home in the serenity of the outdoors. A romantic getaway is what your heart wants.”

Why It’s Hot: 

We can think we know what we want, and go after it, but how do we know there isn’t something else we really want? Using unconscious signals to make suggestions will allow brands to help us uncover things we may never have known otherwise. Granted, it’s not revealing serious information like other biometric products we’ve seen recently. But it’s interesting to see what’s possible now that we’re able to tap into biometric data in new ways.

[Source]

 

AI helps deliver JFK’s words from beyond the grave…

On a fateful day in November of 1963, JFK never got to make his “Trade Mart” speech in Dallas. But thanks to the UK’s The Times and friends, we now have a glimpse at what that speech would’ve sounded like that day. Using the speech’s text and AI, The Times:

“Collected 831 analog recordings of the president’s previous speeches and interviews, removing noise and crosstalk through audio processing as well as using spectrum analysis tools to enhance the acoustic environment. Each audio file was then transferred into the AI system, where they used methods such as deep learning to understand the president’s unique tone and quirks expressed in his speech. In the end, the sound engineers took 116,777 sound units from the 831 clips to create the final audio.”
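
The Times hasn’t released its code, but the final concatenation step amounts to picking, for each target sound, the archived unit whose acoustic features are the closest match, then stitching the winners together. A toy sketch with random vectors standing in for real acoustic features:

```python
import numpy as np

# Toy sketch of unit selection: for each target sound (here just a feature vector),
# pick the archived unit with the closest acoustic features, then concatenate.
rng = np.random.default_rng(0)
archive = {f"unit_{i}": rng.normal(size=8) for i in range(1000)}   # stand-in for the 116,777 units

def select_units(target_features):
    chosen = []
    for target in target_features:
        name = min(archive, key=lambda k: np.linalg.norm(archive[k] - target))
        chosen.append(name)
    return chosen   # downstream, the corresponding audio snippets would be stitched together

targets = [rng.normal(size=8) for _ in range(5)]
print(select_units(targets))
```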

Why It’s Hot:

It seems we’re creating a world where anyone could be imitated scientifically. In an instance like this, it’s great: hearing JFK’s words spoken, especially the sentiment in the clip above, was a joy for someone who cares about history and this country, particularly given its current climate. But what if the likeness weren’t recreated to deliver a speech written by him during his time, but rather something he never actually said or intended to say? It brings a whole new meaning to “fake news”.

[Listen to the full 22 minute version straight from the Source]

Norman? More like No, Man.

This is Norman.

Norman isn’t your typical AI that’s here for you to ask random questions when you’re bored. Oh no, Norman was created by researchers at MIT as an April Fools prank. At the beginning of its creation, it was exposed to “the darkest corners of Reddit,” which resulted in the development of its psychopathic data-processing tendencies. The MIT researchers define Norman as:

“A psychotic AI suffering from chronic hallucinatory disorder; donated to science by the MIT Media Laboratory for the study of the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms”

Because of the neural network’s dark tendencies, the project’s website states that Norman is being “kept in an isolated server room, on a computer that has no access to the internet or communication channels to other devices.” As an additional security measure, the room also has weapons such as hammers, saws, and blow-torches in case there happens to be any kind of emergency or malfunction of the AI that would require it to be destroyed immediately.

Norman’s neural network is so far gone that researchers believe “some of the encodings of the hallucinatory disorders reside in its hardware and there’s something fundamentally evil in Norman’s architecture that makes his re-training impossible.” Even after being exposed to neutral holograms of cute kittens and other fun and magical stuff, Norman remains, essentially, just evil. When presented with Rorschach inkblot images, Norman went … well, let’s just say that in the comic universe, it’d be the ideal villain.

Why it’s hot:
We all know that AI is going to take over the world and that technology seems to be controlling us more than we’re controlling it, but this almost perfectly depicts the dangers that could result from AI being developed using violence-fueled datasets.

Source: norman-ai.mit.edu & LiveScience

google AI predicts heart attacks by scanning your eye…

This week, the geniuses at Google and its “health-tech subsidiary” Verily announced an AI that can predict your risk of a major cardiac event, with roughly the same accuracy as the currently accepted method, using just a scan of your eye.

They have created an algorithm that analyzes the back of your eye for important predictors of cardiovascular health “including age, blood pressure, and whether or not [you] smoke” to assess your risk.

As explained via The Verge:

“To train the algorithm, Google and Verily’s scientists used machine learning to analyze a medical dataset of nearly 300,000 patients. This information included eye scans as well as general medical data. As with all deep learning analysis, neural networks were then used to mine this information for patterns, learning to associate telltale signs in the eye scans with the metrics needed to predict cardiovascular risk (e.g., age and blood pressure).

When presented with retinal images of two patients, one of whom suffered a cardiovascular event in the following five years, and one of whom did not, Google’s algorithm was able to tell which was which 70 percent of the time. This is only slightly worse than the commonly used SCORE method of predicting cardiovascular risk, which requires a blood test and makes correct predictions in the same test 72 percent of the time.”
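
The article doesn’t describe Google’s architecture, but the general shape (a convolutional network that looks at a retinal image and regresses risk factors like age and blood pressure) might look like this minimal PyTorch stand-in, which is not the actual model:

```python
import torch
import torch.nn as nn

# Hypothetical, heavily simplified stand-in for the kind of model described:
# a CNN that looks at a retinal image and regresses risk factors such as age
# and systolic blood pressure, plus a smoking probability.
class RetinaNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.age_bp = nn.Linear(32, 2)    # [age, systolic blood pressure]
        self.smoker = nn.Linear(32, 1)    # logit for smoking status

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.age_bp(h), torch.sigmoid(self.smoker(h))

model = RetinaNet()
fake_scans = torch.randn(4, 3, 224, 224)          # a batch of stand-in retinal images
age_bp, smoking_prob = model(fake_scans)
print(age_bp.shape, smoking_prob.shape)           # torch.Size([4, 2]) torch.Size([4, 1])
```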

Why It’s Hot:

This type of application of AI can help doctors quickly know what to look into, and shows how AI could help them spend less time diagnosing, and more time treating. It’s a long way from being completely flawless right now, but in the future, we might see an AI-powered robot instead of a nurse before we see the doctor.

[Source]

Astronomers Using AI to Analyze the Universe – Fast

The next generation of powerful telescopes will scan millions of stars and generate massive amounts of data that astronomers will be tasked with analyzing. That’s way too much data for people to sift through and model themselves — so astronomers are turning to AI to help them do it.

How they’re using it:

1) Coordinate telescopes. The large telescopes that will survey the sky will be looking for transient events — new signals or sources that “go bump in the night,” says Los Alamos National Laboratory’s Tom Vestrand.

2) Analyze data. Every 30 minutes for two years, NASA’s new Transiting Exoplanet Survey Satellite will send back full frame photos of almost half the sky, giving astronomers some 20 million stars to analyze. Over 10 years there will be 50 million gigabytes of raw data collected.

3) Mine data. “Most astronomy data is thrown away but some can hold deep physical information that we don’t know how to extract,” says Joshua Peek from the Space Telescope Science Institute. (A minimal sketch of how that kind of unsupervised flagging might look follows below.)
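
One common way to “discover things you don’t know how to describe” is unsupervised anomaly detection. A minimal sketch using scikit-learn’s IsolationForest on invented light-curve features (none of this is from the actual survey pipelines):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy sketch: flag anomalous light curves without specifying what "anomalous" means.
rng = np.random.default_rng(42)

# Features per star (hypothetical): mean brightness, variability, strongest period.
normal_stars = rng.normal(loc=[1.0, 0.05, 3.0], scale=[0.1, 0.01, 0.5], size=(1000, 3))
weird_star = np.array([[1.0, 0.60, 0.2]])    # something that "goes bump in the night"

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_stars)
print(detector.predict(weird_star))           # [-1] means: flag it for a human to look at
```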

Why it’s hot:

Algorithms have helped astronomers for a while, but recent advances in AI — especially image recognition and faster, more inexpensive computing power — mean the techniques can be used by more researchers. The new AI will automate the process and be able to understand and identify things that humans may not even know exist or begin to understand.

“How do you write software to discover things that you don’t know how to describe? There are normal unusual events, but what about the ones we don’t even know about? How do you handle those? That will be where real discoveries happen, because by definition you don’t know what they are.” – Tom Vestrand, Los Alamos National Laboratory


dragon drive: jarvis for your car…

The wave of magical CES 2018 innovations has begun to roll in, and among those already announced is Nuance Communications’ “Dragon Drive” – an (extremely) artificially intelligent assistant for your car.

According to Digital Trends

“By combining conversational artificial intelligence with a number of nonverbal cues, Dragon Drive helps you talk to your car as though you were talking to a person. For example, the AI platform now boasts gaze detection, which allows drivers to get information about and interact with objects and places outside of the car simply by looking at them and asking Dragon Drive for details. If you drive past a restaurant, you can simply focus your gaze at said establishment and say, “Call that restaurant,” or “How is that restaurant rated?” Dragon Drive provides a “meaningful, human-like response.”

Moreover, the platform enables better communication with a whole host of virtual assistants, including smart home devices and other popular AI platforms. In this way, Dragon Drive claims, drivers will be able to manage a host of tasks all from their cars, whether it’s setting their home heating system or transferring money between bank accounts.

Dragon Drive’s AI integration does not only apply to external factors, but to components within the car as well. For instance, if you ask the AI platform to find parking, Dragon Drive will take into consideration whether or not your windshield wipers are on to determine whether it ought to direct you to a covered parking area to avoid the rain. And if you tell Dragon Drive you’re cold, the system will automatically adjust the car’s climate (but only in your area, keeping other passengers comfortable).

Why It’s Hot:

Putting aside the question of how many AI assistants we might have in our connected future, what was really interesting to see was the integration of voice and eye-tracking biometrics. With things like using your voice as your key (and to personalize settings for you and your passengers), the car reminding you of memories from locations you’re passing, and identifying stores, buildings, restaurants, and other things along your route with just a gaze, it’s amazing to think what the future holds when all the technologies we’ve only just seen emerging in recent years converge.

[More info]

From smart homes to smart offices: Meet Alexa for Business

During the AWS re:Invent conference in Las Vegas, Amazon announced the Alexa for Business platform, along with a set of initial partners that have developed specific “skills” for business customers.

Their main goal seems to be making Alexa a key tool for office workers:

 
– The first focus for Alexa for Business is the conference room. AWS is working with the likes of Polycom and other video and audio conferencing providers to enable this.

– Other partners include Microsoft (to enable better support for its suite of productivity services), Concur (travel expenses), Splunk (big data generated by your technology infrastructure, security systems, and business applications), Capital One, and WeWork.

But that’s just what they’re planning to offer; the new platform will also let companies build out their own skills and integrations.
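
For a feel of what “building out their own skills” looks like, here is a minimal sketch of a custom Alexa skill handler running as an AWS Lambda function; the intent name and the canned response are invented for illustration:

```python
# Minimal sketch of a custom Alexa skill handler (AWS Lambda, Python).
# The intent name ("BookConferenceRoomIntent") and the room lookup are hypothetical.
def lambda_handler(event, context):
    request = event.get("request", {})
    if request.get("type") == "IntentRequest" and \
       request.get("intent", {}).get("name") == "BookConferenceRoomIntent":
        speech = "Conference room 4B is free at 3 p.m. I've booked it for you."
    else:
        speech = "You can ask me to book a conference room."

    # Standard Alexa skill response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```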

Why It’s Hot:
We are finally seeing these technologies take a step toward being actually useful and mainstream.
Since Amazon wants to integrate Alexa with other platforms, it can be an interesting tool for future innovations.
Source: TechCrunch

zero training = zero problem, for AlphaGo Zero…


One of the major milestones in the relatively short history of AI is when Google’s AlphaGo beat the best human Go player in the world in three straight games early last year. In order to prepare AlphaGo for its match, Google trained it using games played by other Go players, so it could observe and learn which moves win and which don’t. It learned from essentially watching others.

This week, Google announced AlphaGo Zero, AI that completely taught itself to win at Go. All Google gave it was the rules, and by experimenting with moves on its own, it learned how to play, and beat its predecessor AlphaGo 100 games to zero after just over a month of training.
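
AlphaGo Zero itself pairs deep neural networks with Monte Carlo tree search, which is far beyond a blog snippet, but the core idea of tabula rasa self-play can be shown on a much simpler game. A sketch that learns tic-tac-toe move values from nothing but the rules and its own games:

```python
import random
from collections import defaultdict

# Tabula rasa self-play on tic-tac-toe: the agent is given only the rules
# (legal moves and win detection) and improves purely from its own games.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if " " not in board else None

Q = defaultdict(float)            # learned value of (state, player, move), starts at zero
ALPHA, EPSILON = 0.3, 0.1         # learning rate and exploration rate

def choose(board, player):
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if random.random() < EPSILON:                          # explore occasionally
        return random.choice(moves)
    return max(moves, key=lambda m: Q[("".join(board), player, m)])

def self_play_episode():
    board, player, history = [" "] * 9, "X", []
    while True:
        move = choose(board, player)
        history.append(("".join(board), player, move))
        board[move] = player
        result = winner(board)
        if result:
            # Nudge every move's value toward the final outcome: +1 win, -1 loss, 0 draw.
            for state, p, m in history:
                reward = 0.0 if result == "draw" else (1.0 if p == result else -1.0)
                Q[(state, p, m)] += ALPHA * (reward - Q[(state, p, m)])
            return
        player = "O" if player == "X" else "X"

for _ in range(50_000):
    self_play_episode()

print(len(Q), "state-action values learned from self-play alone")
```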

Why It’s Hot:

AI is becoming truly generative with what DeepMind calls “tabula rasa learning”. While a lot of AI we still see on a daily basis is extremely primitive in comparison, the future of AI is a machine’s ability to create things with basic information and a question. And ultimately, learning on its own can lead to better results. As researchers put it, “Even when reliable data sets are available, they may impose a ceiling on the performance of systems trained in this manner…By contrast, reinforcement learning systems are trained from their own experience, in principle allowing them to exceed human capabilities, and to operate in domains where human expertise is lacking.”

Robot, a kid’s best friend?

Robots are making their way into schools and education to help children lower their stress and boost their creativity. Among those who have diseases such as diabetes and autism, robots can even help restore their self-confidence.

One study shows that children with autism engage better with robots than with humans because robots are simple and predictable.

Another study, working with children who have diabetes, makes its robots “imperfect” and has them make mistakes so they don’t intimidate the children. The children learn that they don’t have to be perfect all the time.

Why it’s hot (or not): are robots the right companions for children? What impact would it have on human interactions if children are exposed to AI at such a young age?


The Countdown Begins…AI Versus the World

AI is continuing to rule the press headlines across all industries. No matter who you are or what you do, your life will somehow be affected by artificial intelligence. Below are just a few charts recently published by the Electronic Frontier Foundation on how quickly AI is catching up with humans.


Why It’s Hot:

Artificial intelligence will continue to get better over time. So much so that researchers at Oxford and Yale predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years.

googler creates AI that creates video using one image…

One of the brilliant minds at Google has developed an algorithm that can create (and has created) video from a single image. The AI does this by predicting what each of the next frames would be based on the previous one, and in this instance did it 100,000 times to produce the 56-minute-long video you see above. Per its creator:

“I used videos recorded from trains windows, with landscapes that moves from right to left and trained a Machine Learning (ML) algorithm with it. What you see at the beginning is what the algorithm produced after very little learnings. It learns more and more during the video, that’s why there are more and more realistic details. Learnings is updated every 20s. The results are low resolution, blurry, and not realistic most of the time. But it resonates with the feeling I have when I travel in a train. It means that the algorithm learned the patterns needed to create this feeling. Unlike classical computer generated content, these patterns are not chosen or written by a software engineer.”
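
The creator hasn’t published the code, but the basic predict-a-frame-then-feed-it-back loop might look like this toy PyTorch sketch (random tensors stand in for the train-window footage):

```python
import torch
import torch.nn as nn

# Toy next-frame predictor: given one frame, predict the next; at generation time,
# feed each prediction back in to roll out a long video from a single seed image.
class NextFrame(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, frame):
        return self.net(frame)

model = NextFrame()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training pairs would be consecutive frames from the footage; random tensors stand in here.
frames = torch.rand(16, 3, 64, 64)
next_frames = torch.rand(16, 3, 64, 64)
for _ in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(frames), next_frames)
    loss.backward()
    optimizer.step()

# Roll out a short "video" from a single seed image.
frame = torch.rand(1, 3, 64, 64)
video = [frame]
with torch.no_grad():
    for _ in range(10):
        frame = model(frame)
        video.append(frame)
print(len(video), video[-1].shape)   # 11 torch.Size([1, 3, 64, 64])
```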

Why it’s hot:

Creativity and imagination have been among the most inimitable human qualities since forever. And anyone who’s ever created anything remotely artistic will tell you inspiration isn’t as easy as hitting ‘go’. While this demonstration looks more like something you’d see presented as an art-school video project than a timeless social commentary revered in a museum, it made me wonder – what if bots created art? Would artists compete with them? Would they give up their pursuit because bots can create at the touch of a button? Would this spawn a whole new area of human creativity out of the emotion of having your work held up next to programmatic art? Could artificial intelligence ever create something held up against real human creativity?

better living partying through chemistry technology…


[about 2:05-2:45 should do it]

It’s not just a clever name: PartyBOT is your “Artificial Dance Assistant”, or ADA for short. Debuted at SXSW last month, ADA learns about party-goers’ musical tastes, drink preferences, and social savvy. Then it uses facial and voice recognition to monitor the room, playing tunes tailored to the interests of those who aren’t partying hard enough, as determined by their expressions and conversations. As described by its creators…

“The users’ relationship with the bot begins on a mobile application, where—through a facial recognition activity—the bot will learn to recognize the user and their emotions. Then, the bot will converse with the user about party staples—music, dancing, drinking and socializing—to learn about them and, most importantly, gauge their party potential. (Are they going to be a dance machine or a stick in the mud—the bot, as a bouncer of sorts, is here to find out.)

Upon arrival at the bar, the user will be recognized by PartyBOT, and throughout the party, the bot will work to ensure a personalized experience based on what it knows about them—their favorite music, beverages, and more. (For example, they might receive a notification when the DJ is playing one of their favorite songs.)”

Why it’s hot:

Obviously this was an intentionally lighthearted demonstration of using bots and other technology to improve an experience for people. Apart from knowing you like Bay Breezes, imagine the improved relationship brands could have with their customers by gathering initial preferences and using them to tailor experiences to each individual. Bots are often thought of in a very simplistic question/answer/solution form, but this shows how combining AI with other emerging technologies can make for a much more personally exciting overall experience.

Google Home can now recognize multiple voices

Google Home can now be trained to identify the different voices of people you live with. Today Google announced that its smart speaker can support up to six different accounts on the same device. The addition of multi-user support means that Google Home will now tailor its answers for each person and know which account to pull data from based on their voice. No more hearing someone else’s calendar appointments.

So how does it work? When you connect your account on a Google Home, Google asks you to say the phrases “Ok Google” and “Hey Google” two times each. Those phrases are then analyzed by a neural network, which can detect certain characteristics of a person’s voice. From that point on, any time you say “Ok Google” or “Hey Google” to your Google Home, the neural network will compare the sound of your voice to its previous analysis so it can understand whether it’s you speaking or not. This comparison takes place only on your device, in a matter of milliseconds.
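
Google hasn’t detailed the model, but the on-device comparison step is essentially matching a new clip’s voiceprint against each enrolled profile. A toy sketch in which the embedding function is a deterministic stand-in for the real neural network:

```python
import numpy as np

def embed(audio):
    """Stand-in for the neural network that turns an 'Ok Google' clip into a voiceprint vector."""
    rng = np.random.default_rng(abs(hash(audio)) % (2**32))
    return rng.normal(size=64)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Enrollment: each household member says the hotword a couple of times.
enrolled = {name: embed(f"{name}-enrollment") for name in ["alice", "bob"]}

def identify(audio, threshold=0.99):
    """Compare the new clip's voiceprint to each enrolled profile; None if nobody matches."""
    query = embed(audio)
    name, score = max(((n, cosine(query, v)) for n, v in enrolled.items()), key=lambda x: x[1])
    return name if score >= threshold else None

print(identify("alice-enrollment"))   # alice (identical clip, so a perfect match)
print(identify("stranger-request"))   # None (below the match threshold)
```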

Why it’s hot:
- Everyone in the family gets a personal assistant.
- Imagine how it might work in a small business or office.
- Once it starts recognizing more than six voices, can every department have its own AI assistant?

Read your EKG Instantly on your Phone with KARDIA

The world knows no deadlier assassin than heart disease. It accounts for one in four fatalities in the US. Early detection remains the key to saving lives, but catching problems at the right time too often relies upon dumb luck. The most effective way of identifying problems involves an EKG machine, a bulky device with electrodes and wires.

Most people visit a doctor for an electrocardiogram. That, too, is no guarantee, because the best detection means being tested when a potential problem reveals itself. Otherwise, early signs of heart disease might go undetected.

At-risk patients might find a compact, easy-to-use EKG machine a good option. Like so many other gadgets, portable EKG machines are getting ever smaller—just look at products like Zio, HeartCheck, and QardioCore.

The Kardia from AliveCor is about the width of two sticks of gum. Stick the $100 device on the back of your phone or slip it into your wallet, place a few fingers on it for 30 seconds, and you’ve got a medical-grade EKG reading on your phone.

But the bigger story is not in the gadget’s size, but in what happens with the heart data it collects. The company uses neural networks and algorithms to identify signs of heart disease, an approach it hopes might change how cardiologists diagnose patients.

The company has been successful at convincing the FDA, the Mayo Clinic, and investors that the device’s ease of use will lead to more frequent testing and increase the likelihood of early detection. About a month of use builds a heart profile, and from then on Kardia’s data-driven algorithm can detect if something goes amiss. Your doctor receives a message only when an anomaly is detected.
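
AliveCor hasn’t published its algorithm, but conceptually a month of readings establishes a personal baseline and later readings are flagged when they fall far outside it. A toy sketch using a single heart-rate number and a z-score threshold (the real features would be far richer):

```python
import numpy as np

# Toy sketch: build a personal baseline from ~a month of readings, then flag
# later readings that fall far outside it.
rng = np.random.default_rng(7)
baseline_readings = rng.normal(loc=72, scale=4, size=30)   # resting heart rate, one per day

mean, std = baseline_readings.mean(), baseline_readings.std()

def check(reading, z_threshold=3.0):
    z = abs(reading - mean) / std
    return "notify doctor" if z > z_threshold else "normal"

print(check(74))    # normal
print(check(118))   # notify doctor
```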

Why it is hot: The future of diagnostics lies in a data-driven approach. With IBM Watson and other innovations in machine learning, we are in for a healthier future!

 

Dr. AI Helps Patients Gain Access to Clinical Expertise About Their Condition

According to an article from Access AI, HealthTap is introducing an artificial intelligence engine to triage cases automatically. Doctor A.I. is a personal AI-powered physician that provides patients with doctor-recommended insights.

More than a billion people search the web for health information each year, with approximately 10 billion symptom-related searches on Google alone. While many resources provide useful information, web search results can only provide content semantically related to symptoms. The new function from HealthTap aims to incorporate the context and clinical expertise of doctors who have helped triage hundreds of millions of patients worldwide to provide the most effective course of treatment. Dr. A.I. uses HealthTap’s Health Operating System to analyse a user’s current symptoms and cross-check them against the personal health record the user has created. Based on what it finds in that data, Dr. A.I. tailors pathways ranging from suggesting the patient read relevant doctor insights and content, to connecting the patient with a doctor for a live virtual consult, to scheduling an in-person office visit with the right specialist, all the way to directing the patient to more urgent care, depending on the patient’s symptoms and characteristics.
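
The engine itself is proprietary, but the escalating pathways described above can be pictured as a simple severity ladder. A hypothetical sketch with invented rules:

```python
# Hypothetical sketch of the escalating pathways described above. The actual
# Dr. A.I. engine is proprietary; these rules are invented for illustration.
PATHWAYS = ["read doctor insights", "live virtual consult", "in-person specialist visit", "urgent care"]

def triage(symptoms, health_record):
    severity = 0
    if "chest pain" in symptoms or "shortness of breath" in symptoms:
        severity = 3
    elif "fever" in symptoms and health_record.get("age", 30) > 65:
        severity = 2
    elif "fever" in symptoms or "persistent cough" in symptoms:
        severity = 1
    # Cross-check against the personal health record for extra risk factors.
    if health_record.get("conditions") and severity < 3:
        severity += 1
    return PATHWAYS[severity]

print(triage({"fever"}, {"age": 40, "conditions": []}))                 # live virtual consult
print(triage({"chest pain"}, {"age": 70, "conditions": ["diabetes"]}))  # urgent care
```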

Why It’s Hot

At first glance, the app looks like WebMD: patients input their symptoms using a visual interface and the app spits back a diagnosis. Where this app differs, though, is in the level of personalized recommendations that follow the diagnosis.

Through our SENSE and Journey Mapping work across our pharma clients, we know that patients are consulting Dr. Google both before and after they are diagnosed with a condition and prescribed a treatment, where they are exposed to virtually limitless information about the condition and the drug they’ve been prescribed, from all kinds of sources, whether those sources have clinical expertise or not. In some severe cases, this can even stop patients from filling the prescription and taking the drug, due to fear of side effects, the intimidating cost of the drug or lack of coverage, anxiety around administering the drug, and, on top of all that, apprehension about whether this is the correct treatment for them. Dr. A.I. has the potential to circumvent a lot of that behavior by providing clinical expertise about the condition, using the same deductive approach as HCPs, in a patient-focused interface.

Fake News Challenge: Using AI To Crush Fake News

The Fake News Challenge is a grassroots competition of over 100 volunteers and 71 teams from academia and industry to find solutions to the problem of fake news.

The competition is designed to foster the development of new tools to help human fact checkers identify real news from fake news using machine learning, natural language processing, and AI.
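
The challenge’s first task framed this as stance detection: given a headline and an article body, decide whether the body agrees with, disagrees with, discusses, or is unrelated to the headline. A toy scikit-learn sketch of that framing, with invented examples and only a few training pairs:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stance-detection sketch: classify headline/body pairs. Real systems in the
# challenge used far larger labeled datasets and richer features.
pairs = [
    ("Company X recalls product", "Company X confirmed the recall on Friday.", "agree"),
    ("Company X recalls product", "A spokesperson denied any recall was planned.", "disagree"),
    ("Company X recalls product", "Local sports team wins championship game.", "unrelated"),
    ("Celebrity spotted in Paris", "Witnesses confirmed the sighting near the Louvre.", "agree"),
    ("Celebrity spotted in Paris", "Her agent denied she had left the country.", "disagree"),
    ("Celebrity spotted in Paris", "New phone model announced at trade show.", "unrelated"),
]

X = [headline + " [SEP] " + body for headline, body, _ in pairs]
y = [stance for _, _, stance in pairs]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X, y)

print(model.predict(["Company X recalls product [SEP] Officials denied the recall."]))  # likely 'disagree'
```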

 

http://www.fakenewschallenge.org/

Why it’s hot:

  • When everyone can create content anywhere, it’s important that truth be validated and misinformation identified.
  • This is an immensely important and complex task executed as a global hackathon spread over 6 months. Big challenges can be approached in new ways.
  • This challenge will result in new tools that could make their way into our publishing platforms, our social networks, etc. – is this potentially good or bad for us?

 

Human-like robots edge closer to reality

If you’ve lived in fear of a futuristic robot rebellion, the newest creation from Google-owned Boston Dynamics won’t do much to ease your fears. The Atlas humanoid robot is probably the most lifelike, agile and resilient robot built to date.  As the video shows, it can walk on snow and keep its balance, open doors, stack 10-pound boxes on shelves and even pick itself up from the floor after being knocked down. And that’s where things get a little frightening.

Even though this is only a demonstration, Atlas’ handler abuses it by knocking boxes out of its hands and then shoving it in the back with a stick so it falls on the floor. But much like a ninja fighter, it springs back up and keeps on going. If you hearken back to RoboCop, all this robot needs is a weapon to turn the tables on its human tormentor.

Why It’s Hot

Robots such as Atlas will some day be doing much of the back-breaking labor humans now do — picking crops, construction, fire fighting. But as the author of the cnet.com article where this appeared says, “Elon Musk once warned that Skynet (the evil artificial intelligence from the Terminator movies) could only be a few years off, and Google is increasingly looking like Skynet.” So while Atlas may act pretty cool and have good applications, it does have its ominous side.

Researchers create ‘self-aware’ Super Mario with artificial intelligence

Mario just got a lot smarter.

A team of German researchers has used artificial intelligence to create a “self-aware” version of Super Mario who can respond to verbal commands and automatically play his own game.

The Mario Lives project was created by a team of researchers out of Germany’s University of Tübingen as part of the Association for the Advancement of Artificial Intelligence’s (AAAI) annual video competition.

The video depicts Mario’s newfound ability to learn from his surroundings and experiences, respond to questions in English and German and automatically react to “feelings.”

If Mario is hungry, for example, he collects coins. “When he’s curious he will explore his environment and autonomously gather knowledge about items he doesn’t know much about,” the video’s narrator explains.

The video also demonstrates Mario’s ability to learn from experience. When asked “What do you know about Goomba” — that’s Mario’s longtime enemy in the Super Mario series — Mario first responds “I do not know anything about it.”

But after Mario, responding to a voice command, jumps on Goomba and kills it, he is asked the question again. This time, he responds “If I jump on Goomba then it maybe dies.”

Source: Mashable

 

Why It’s Hot

This showcases a fun use of Artificial Intelligence, which typically is a little scary. This could have implications for expanded use and trust of AI, but for now it’s all in good fun and good tech.

 

Soothing robot in the doctor’s office

Going to the doctor can be a scary trip for children.  But a robot named MEDI can make the visit a little bit easier and less frightening.  Short for Medicine and Engineering Designing Intelligence, MEDI stays with the child through medical procedures, talking to them in one of 20 languages and offering soothing advice to get them through the visit.

Equipped with multiple cameras, facial recognition technology and the ability to speak directly to the little patients, MEDI is the product of Tanya Beran, a professor of community health sciences at the University of Calgary in Alberta.  Her team began developing MEDI three years ago and conducted a study of 57 children.  According to Yahoo Tech, “Each was randomly assigned a vaccination session with a nurse, who used the same standard procedures to dispense the medication. In some of those sessions, MEDi used cognitive-behavioral strategies to assuage the children as they got the shot. Afterward, children, parents, and nurses filled out surveys to estimate the pain and distress of the whole shebang.”

The result was that the kids who had MEDI by their side during the procedure reported less pain. Since that study, MEDI is being programmed for more serious procedures, ranging from chemotherapy to blood transfusions to surgery.

Why it’s hot

Robotic technology is starting to come together with practical applications for people.  With motion, voice, the ability to recognize humans and interact with logical language patterns, MEDI is a natural step along the way to fully interactive robots, possibly even artificial intelligence.