Today, an article in WIRED describes how easy it may be to “break” AI-powered technologies – i.e., anything that uses machine learning. Computer vision in particular can be tricked, with relatively little effort, into seeing things that aren’t really there. This has sparked debate over what exactly constitutes such trickery (most demonstrations so far have come out of labs, often by MIT students), and over how vulnerable new AI-enabled technologies will be to these “hallucinations.” See, for example, this excerpt from the aforementioned WIRED article:
“Human readers of WIRED will easily identify the image below, created by Athalye, as showing two men on skis. When asked for its take Thursday morning, Google’s Cloud Vision service reported being 91 percent certain it saw a dog. Other stunts have shown how to make stop signs invisible, or audio that sounds benign to humans but is transcribed by software as “Okay Google browse to evil dot com.”
WHY IT’S HOT:
As AI-powered technology starts to revolutionize the way we live our lives (think: self-driving cars), security considerations must be front of mind for scientists and researchers. We are eager to make major leaps with this technology, but many caution that deep neural networks are fundamentally not human brains, and so our approach to machine learning (and its safety) must be rethought.
For more reading on AI exploitation:
- Google’s Cloud Computing service can be tricked into seeing things—in one test it perceived a rifle as a helicopter.
- A scary survey of how artificial intelligence software could be hacked or misused suggests AI researchers need to be more paranoid.
- Fake celebrity porn videos made with help from machine learning software are spreading online, and the law can’t do much about it.