MIT goes into the deep end with a fake fish

MIT researchers created a robotic fish, SoFi, to study sea life. It was modeled after a real fish so it can blend in with the creatures it observes.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” says MIT CSAIL Director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

Read the full story at Popular Mechanics.

Why It’s Hot

Hopefully the next generation of SoFi will be able to dive deeper into the ocean and explore places out of reach for human divers.

The camera doesn’t lie, but the algorithm might…

Algorithms fooling algorithms may be one of the most 21st-century things to happen yet, and it has happened. Researchers at MIT used an algorithm to produce 3D-printed versions of model objects engineered to be recognized as entirely different things by Google’s image recognition technology. In short, they fooled Google’s image recognition into classifying a 3D-printed stuffed turtle as a rifle. They also made a 3D-printed baseball register as espresso, and a picture of a cat as guacamole. Technology truly is magic.

Their explanation:

“We do this using a new algorithm for reliably producing adversarial examples that cause targeted misclassification under transformations like blur, rotation, zoom, or translation, and we use it to generate both 2D printouts and 3D models that fool a standard neural network at any angle.

“It’s actually not just that they’re avoiding correct categorization — they’re classified as a chosen adversarial class, so we could have turned them into anything else if we had wanted to. The rifle and espresso classes were chosen uniformly at random.”
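The researchers’ actual method optimizes adversarial inputs through a deep neural network while averaging over random blurs, rotations, and other transformations. As a much smaller illustration of just the “targeted misclassification” idea they describe, the sketch below nudges an input until a toy linear classifier assigns it a chosen target class. The weights, dimensions, and step sizes are made-up stand-ins, not anything from MIT’s work:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in classifier: 3 classes over 8-dimensional "images".
# (A real attack would target a deep network, not random linear weights.)
W = rng.normal(size=(3, 8))

def logits(x):
    return W @ x

def predict(x):
    return int(np.argmax(logits(x)))

# Start from an input the model labels as some source class,
# then pick a different class as the adversarial target.
x = rng.normal(size=8)
source = predict(x)
target = (source + 1) % 3

# Targeted attack: repeatedly nudge the input along the gradient of
# (target logit - current top logit), a simple classification margin.
# For this linear model that gradient is just W[target] - W[top].
x_adv = x.copy()
for _ in range(500):
    top = predict(x_adv)
    if top == target:
        break
    g = W[target] - W[top]
    x_adv = x_adv + 0.1 * g / (np.linalg.norm(g) + 1e-12)

print(f"label before: {source}, label after: {predict(x_adv)}, target: {target}")
```

On a deep network the same loop would use backpropagated gradients, and the EOT trick adds an average over a distribution of transformations so the misclassification survives blur, rotation, and viewpoint changes.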

Why It’s Hot
Clearly there are implications for the practicality of image recognition. If researchers can do this fairly easily in a lab, what’s to stop anyone with enough technical savvy from doing it in the real world? They could even reverse the trick, disguising a rifle as a stuffed turtle to slip through an AI-driven, image-recognition security checkpoint. Another scary implication is self-driving cars, which depend on image recognition to interpret their surroundings. It all shows we need much more ethical hacking to anticipate and prevent these kinds of security concerns.

Slack AI says maybe you need a mid-afternoon snack…

Slack CEO Stewart Butterfield recently spoke to MIT Technology Review about the ways the company plans to use AI to keep people from feeling overwhelmed with data. Some interesting tidbits from the interview…


When asked about goals for Slack’s AI research group, Butterfield pointed to search. “You could imagine an always-on virtual chief of staff who reads every single message in Slack and then synthesizes all that information based on your preferences, which it has learned about over time. And with implicit and explicit feedback from you, it would recommend a small number of things that seem most important at the time.”

When asked what else the AI group was researching, Butterfield pointed to organizational insights. “I would—and I think everyone would—like to have a private version of a report that looks at things like: Do you talk to men differently than you talk to women? Do you talk to superiors differently than you talk to subordinates? Do you use different types of language in public vs. private? In what conversations are you more aggressive, and in what conversations are you more kind? If it turns out you tend to be accommodating, kind, and energetic in the mornings, and short-tempered and impatient in the afternoon, then maybe you need to have a midafternoon snack.”

Read more at MIT Technology Review.

Why It’s Hot
The idea of analyzing organizational conversation to understand and solve collaboration and productivity issues is incredibly intriguing – and, as always with these things, it’s something to keep an eye on to ensure the power is used for good.