Disney is using new deep learning software to analyze movie-goers’ facial expressions and gauge how much they’re enjoying a film.
The innovation at the heart of the new system is an algorithm that Disney and Caltech call the factorized variational autoencoder (FVAE), which uses deep learning to automatically convert facial expressions into numerical data and can also incorporate metadata.
Combining the FVAE algorithm with infrared cameras, Disney can analyze the facial expressions of moviegoers in a cinema as they react to what is shown on screen. With enough information, the system can even predict how an audience member will react to upcoming scenes after just 10 minutes of observation.
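The core intuition behind that prediction is that audience reactions are highly redundant: a few latent factors per viewer and per moment of the film explain most of the variation. As a rough illustration (not Disney's actual model), the toy sketch below builds a synthetic low-rank "viewers x timesteps" reaction matrix, learns the shared time factors from fully observed viewers, and then reconstructs one viewer's entire reaction curve from only their early observations. All names and numbers here are invented for illustration.

```python
import numpy as np

# Toy stand-in for the FVAE idea: reactions form a (viewers x timesteps)
# matrix that is approximately low-rank, so per-viewer and per-timestep
# latent factors explain it. Synthetic data only; NOT Disney's model.
rng = np.random.default_rng(0)
n_viewers, n_times, rank = 6, 8, 2

viewer_factors = rng.normal(size=(n_viewers, rank))  # per-person reaction style
time_factors = rng.normal(size=(rank, n_times))      # per-scene shared signal
reactions = viewer_factors @ time_factors            # full "smile intensity" matrix

# Pretend we only observed the first half of the film for viewer 0,
# but the whole film for everyone else.
observed = reactions.copy()
observed[0, n_times // 2:] = np.nan

# Learn the shared time factors from the fully observed viewers...
_, _, Vt = np.linalg.svd(observed[1:], full_matrices=False)
basis = Vt[:rank]

# ...then solve for viewer 0's latent factors from their early reactions
# alone, and reconstruct their reactions for the rest of the film.
early = slice(0, n_times // 2)
coef, *_ = np.linalg.lstsq(basis[:, early].T, observed[0, early], rcond=None)
predicted = coef @ basis

print(np.allclose(predicted, reactions[0], atol=1e-6))  # future reactions recovered
```

Because the synthetic matrix is exactly low-rank, the reconstruction is essentially exact; real audience data is noisier, which is where the variational autoencoder machinery comes in.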
Why It’s Hot
- The technology could be used to tailor a film to an audience in real time, introducing a new level of personalization to cinema
- The data gathered and analyzed could feed other AI systems in development that read body-language cues to better assist people (e.g. robot babysitters)
- Raises the question of how this will shape the movies we end up seeing, with AI now acting as the gatekeeper between us and the next Sharknado