I can’t read you like a cake.

Amazon’s algorithm is reading a new emotion on faces: fear

Amazon’s Rekognition is face-detection software that is supposed to know what emotion you are feeling.

Technically, the algorithm works by learning how people’s faces usually look when they express fear. Then, when you show it a new image, it can tell you with a certain probability whether that person’s face is communicating the emotion of fear, leaving the human to then decide what to do with the information. The company’s descriptions of the product claim that “Amazon Rekognition detects emotions such as happy, sad, or surprise, and demographic information such as gender from facial images.”
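For the curious, here is a minimal sketch of what asking Rekognition for emotions looks like through the AWS Python SDK (boto3). The file name and region are placeholders, and the response handling is simplified; this only shows the API call, not how Amazon’s model works under the hood.

```python
import boto3

# A minimal sketch of asking Rekognition for emotions on a single image.
# "face.jpg" and the region are placeholders; you supply your own photo and AWS credentials.
client = boto3.client("rekognition", region_name="us-east-1")

with open("face.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # "ALL" includes the emotion estimates
    )

# Each detected face comes back with a list of emotions and confidence scores;
# the human still has to decide what a "64% FEAR" actually means.
for face in response["FaceDetails"]:
    for emotion in face["Emotions"]:
        print(emotion["Type"], round(emotion["Confidence"], 1))
```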

Can we build something to do what humans themselves don’t even know how to do?

For example, we built calculators because we had already figured out math and needed a quicker way to get past tedious calculations so we could tackle harder ones.

We rarely understand each other, and emotions vary so much from person to person. What looks like a flash of anger might be disgust in one person or just a twitch in another.

Answer: It’s not working, but Amazon is still selling it. It falsely matched 20% of California’s state legislators’ photos. 20%! They’ve sold it to marketing companies, no surprise. But they are also marketing it to police forces and immigration agencies, where there could be a real problem.

“In the past, civil rights groups, AI experts, and even some of Amazon’s own investors have asked the company to stop deploying its facial recognition technology given industry-wide issues with accuracy.”

Ethically, how can someone buy half-baked technology? Why would you want half a baked pie?

Your Brain, and how we will tap into it.

How easy will journey/relationship maps be with this new Facebook mind reader?

Facebook is funding and assisting in a University of California San Francisco study aimed at giving people with serious brain injuries the chance to type words with only their minds. The research is showing real results and promise. Facebook hopes to use the underlying technology as a basis for a new brain-computer input mode for augmented-reality glasses that you wear all day.

If Facebook’s researchers can get their headset to accurately detect which neurons are firing, they may be able to use algorithms based on the UCSF research to map that neural activity to the specific letters, and then the words, the user is thinking of.
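As a purely illustrative sketch (this is not Facebook’s or UCSF’s actual method, and the data below is fake), the idea boils down to a classifier that takes a vector of neural-activity features and guesses the most likely letter:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Purely illustrative: pretend each recording is a 64-dimensional feature vector
# summarizing neural activity while the user thinks of a single letter.
rng = np.random.default_rng(0)
letters = list("abcdefghijklmnopqrstuvwxyz")

# Fake training data standing in for labeled brain recordings.
X_train = rng.normal(size=(2600, 64))
y_train = rng.choice(letters, size=2600)

# The real research uses far more sophisticated models and features;
# this just shows the shape of the problem: features in, letters out.
model = LogisticRegression(max_iter=500).fit(X_train, y_train)

new_recording = rng.normal(size=(1, 64))
print(model.predict(new_recording))  # the model's best guess at the letter
```

String enough of those guesses together and you have typing by thought, which is exactly what the UCSF study is chasing.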

Facebook envisions AR glasses that read your thoughts, and it isn’t just a dream.

This is not the first time someone has tried to map out the brain and how we process speech, though! Here is a podcast about it: Freakonomics podcast on how your brain tells stories.

And here is a cool interactive 3-D semantic map that the Gallant Lab created after building the brain map the podcast above details: Gallant Lab 3D Brain.

Why it’s hot.

Besides being a cool look into what your brain has been doing inside your skull all this time, ever since you were born, without you putting an ounce of energy into it?

It brings the cliché of living in your own world to reality. And it raises so many questions we can ask about the future, about what this means for our brands, and about language and communication in general.

Weightless living, only connecting.

Questions: How much more personal does our messaging have to be to match this sort of personal, 24/7 virtual reality? In how many ways do, or can, we understand language and meaning? What comes first, the language or the meaning?
- If you think it’s meaning, then what happens when you hear a word you don’t know?
- If you think it’s language, what did you naturally feel for a caretaker before you knew the word for it?
What would life be like if we could communicate via telepathy? Can we communicate solely via meaning? If we could communicate only via meaning, would there ever be arguments? Would there be group telepathy meetings?

And many more.