Researchers have generated imagery that can fool AI vision systems, like those on self-driving cars, into seeing objects that aren't there. While this kind of attack has been around for a while, researchers at Google recently developed a method for printing the attack images on small stickers.
Unlike earlier adversarial attacks, these stickers don’t need to be tuned to the specific image they’re trying to override, nor does it matter where they appear in the AI’s field of view. Here’s what it looks like in action, with a sticker that turns a banana into a toaster:
Although adversarial images can be disconcertingly effective, they’re not some super magic hack that works on every AI system every time. Patches like the one the Google researchers created take time and effort to generate, and usually require access to the code of the vision systems they’re targeting. The problem, as research like this shows, is that these attacks are steadily getting more flexible and effective. Stickers might be just the start.
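To see why access to a model’s internals matters, here is a minimal sketch of a targeted adversarial attack in the spirit of the fast-gradient-sign method. This is not the Google researchers’ patch technique; it uses a toy, randomly weighted linear classifier as a stand-in for a vision model, purely to illustrate how an attacker with white-box access can use the model’s own gradients to push its prediction toward a chosen class.

```python
import math
import random

random.seed(0)
DIM, CLASSES = 64, 3  # toy 8x8 "images", 3 classes (e.g. banana/toaster/other)

# Stand-in for a vision model: a linear softmax classifier with random
# weights. Illustrative only -- a real attack targets a trained network.
W = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(CLASSES)]

def predict(x):
    """Return softmax class probabilities for input x."""
    logits = [sum(w * xi for w, xi in zip(row, x)) for row in W]
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def targeted_attack(x, target, step=0.1, iters=50):
    """Nudge x toward being classified as `target`, using model gradients.

    Needing these gradients is exactly why such attacks usually require
    access to the model's internals ("white-box" access).
    """
    x_adv = list(x)
    for _ in range(iters):
        p = predict(x_adv)
        # Gradient of cross-entropy loss w.r.t. input for a linear model:
        # dL/dx_j = sum_k (p_k - onehot_k) * W[k][j]
        err = [p_k - (1.0 if k == target else 0.0) for k, p_k in enumerate(p)]
        for j in range(DIM):
            g = sum(err[k] * W[k][j] for k in range(CLASSES))
            # Step against the sign of the gradient to reduce target loss.
            x_adv[j] -= step * (1 if g > 0 else -1 if g < 0 else 0)
    return x_adv

x = [random.gauss(0, 1) for _ in range(DIM)]   # a random "image"
orig = max(range(CLASSES), key=lambda k: predict(x)[k])
target = (orig + 1) % CLASSES                  # attacker picks another class
x_adv = targeted_attack(x, target)
adv = max(range(CLASSES), key=lambda k: predict(x_adv)[k])
```

After the loop, `adv` equals the attacker’s chosen `target` class: a small, systematic perturbation flips the model’s decision even though the input still looks essentially the same to a human.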
Why it’s hot
As we rely more on AI vision systems to unlock our phones, drive our cars, open our doors, and more, vulnerabilities in those systems will become increasingly apparent. As with all emerging technology, there are risks of misuse and neglect, but there are also brilliant computer scientists and information security professionals working to keep us from living out episodes of Black Mirror. The more we understand about their work, the safer we become, and the easier their jobs become as well.