The way our cameras process and represent images is changing in a subtle but fundamental way, shifting them from ‘capturing the moment’ to creating it algorithmically.
Reporting on the camera in Google’s new Pixel 4 smartphone, Brian Chen of the New York Times writes:
“When you take a digital photo, you’re not actually shooting a photo anymore.
‘Most photos you take these days are not a photo where you click the photo and get one shot,’ said Ren Ng, a computer science professor at the University of California, Berkeley. ‘These days it takes a burst of images and computes all of that data into a final photograph.’
Computational photography has been around for years. One of the earliest forms was HDR, for high dynamic range, which involved taking a burst of photos at different exposures and blending the best parts of them into one optimal image.
Over the last few years, more sophisticated computational photography has rapidly improved the photos taken on our phones.”
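The burst-and-blend idea behind HDR can be sketched in a few lines. The snippet below is a simplified, hypothetical illustration (not Google’s actual pipeline): it weights each pixel in a burst of differently exposed frames by how well-exposed it is, then blends the frames into one image.

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    # Weight pixels near mid-gray (0.5) highest; badly over- or
    # under-exposed pixels contribute little to the final blend.
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(images):
    """Blend a burst of differently exposed frames (values in [0, 1])
    into one image using per-pixel well-exposedness weights."""
    stack = np.stack(images)                       # shape (n, H, W)
    weights = well_exposedness(stack)
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)

# Toy "scene": a brightness ramp captured at three simulated exposures.
scene = np.tile(np.linspace(0.0, 1.0, 8), (4, 1))
burst = [np.clip(scene * g, 0.0, 1.0) for g in (0.5, 1.0, 2.0)]

fused = fuse_exposures(burst)
```

Real implementations add alignment of the burst frames and multi-scale blending, but the core step is the same: many captures, one computed photograph.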
This technology is evident in Google’s Night Sight, which captures usable low-light photos without a flash.
Why it’s hot:
In a world where the veracity of photographs and videos is coming into question because of digital manipulation, it’s interesting that alteration is now baked in.