Smartphone camera hardware is getting interesting. From the Samsung Galaxy S23 Ultra’s 200-megapixel primary camera to the Xiaomi 13 Ultra, which sports a variable aperture over a one-inch sensor, it’s an exciting time to be a mobile photography geek. But even in the best Android phones, computing power and software prowess play as big a role in getting great photos from our pocket-size cameras as hardware does. Smartphones employ what’s called computational photography to compensate for their comparatively meager hardware. What most phone cameras lack in brawn compared to dedicated camera hardware, they make up for with clever computing.
Computational photography: the basics
It’s hard to put a fine point on such a broad term, but computational photography is, in a nutshell, what it sounds like: photography enhanced through computing power. While every digital camera technically employs software to make images, in photos taken using computational photography techniques, software plays a big part in shaping how the final product looks, long before the photographer applies ready-made filters or manual post-processing after the fact.
Assuming you take photos with your phone, you’re likely familiar with computational photography in some fashion. Practically everything your phone’s camera does (such as low-light modes, portrait blur, and scene recognition) relies on computational photography to automatically shape the image you get when you tap the shutter button.
Applications of computational photography
There are different use cases for computational photography, from your phone’s camera recognizing it’s pointed at a sunset to controversial beauty filters in social media apps. Here are a few common applications.
HDR and low-light modes
You’ve probably seen the visualizations smartphone makers use to illustrate HDR photography: stacks of individual frames automatically merging into a single image that looks better than any of the frames it’s made from. This is a common application of computational photography. While it’s possible to manually take photos at multiple exposures and merge them into one high-dynamic-range image, our phones can do it with the tap of a single button, thanks to ever-increasing mobile computing power and clever software.
As Google explained in 2021, the process of creating an HDR shot on a smartphone involves quickly capturing several underexposed frames to preserve highlight details, then combining the information from those frames with a longer exposure to preserve the scene’s dynamic range. In effect, the phone takes highlight detail from the underexposed shots and shadow detail from the longer exposure. Pixel phones’ Night Sight low-light mode works similarly but requires longer exposure times to capture details in dark scenes.
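To make the idea concrete, here’s a minimal sketch of exposure stacking, assuming a burst of short frames plus one long exposure that are already aligned. The merge_hdr helper and its luminance-based weighting are illustrative inventions for this article, not Google’s HDR+ pipeline, which also handles frame alignment, robust merging, and tone mapping.

```python
import numpy as np

def merge_hdr(short_frames, long_frame):
    """Toy HDR merge: average a burst of short exposures for clean
    highlights, then blend in shadow detail from a longer exposure.

    short_frames: list of float arrays in [0, 1], underexposed
    long_frame:   float array in [0, 1], longer exposure
    """
    # Averaging the burst reduces noise without clipping highlights.
    short_avg = np.mean(short_frames, axis=0)

    # Per-pixel weight: lean on the long exposure in dark regions
    # and on the short burst in bright ones (a crude luminance mask).
    luminance = short_avg.mean(axis=-1, keepdims=True)
    w_long = np.clip(1.0 - luminance * 2.0, 0.0, 1.0)

    return w_long * long_frame + (1.0 - w_long) * short_avg

# Synthetic example: three noisy short frames plus one long exposure.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(64, 64, 3))
shorts = [np.clip(scene * 0.4 + rng.normal(0, 0.02, scene.shape), 0, 1)
          for _ in range(3)]
long_exp = np.clip(scene * 1.5, 0, 1)  # brighter, but highlights clip
hdr = merge_hdr(shorts, long_exp)
print(hdr.shape, round(hdr.min(), 3), round(hdr.max(), 3))
```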
Augmented reality
Augmented reality (AR) is the real-time layering of additional graphics over your physical surroundings. In the context of smartphones, that means recognizing environments as seen through camera sensors and realistically layering things on top of those environments on your phone’s screen.
There are tons of commonplace applications for AR: filters that superimpose effects that react to real-life objects in the frame in social media apps like TikTok and Snapchat, augmented reality characters and effects in games like Pokémon Go, and simulating new home goods in your space with shopping apps like Amazon and Wayfair. These use cases rely on your phone being able to interpret what it sees through its cameras and to react to that input in real time through some pretty complex computation.
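The heavy lifting in AR is the tracking and scene understanding, but the final step is straightforward: compositing rendered graphics over the live camera frame. Here’s a minimal sketch of that compositing step alone, using made-up data; the composite_overlay helper is hypothetical, and real frameworks like ARCore handle the pose tracking that decides where the overlay belongs.

```python
import numpy as np

def composite_overlay(frame, overlay_rgba, top_left):
    """Alpha-blend an RGBA graphic onto a camera frame at a fixed
    anchor point, the last step of an AR pipeline once tracking has
    worked out where the virtual object should sit."""
    y, x = top_left
    h, w = overlay_rgba.shape[:2]
    region = frame[y:y + h, x:x + w].astype(float)
    rgb = overlay_rgba[..., :3].astype(float)
    alpha = overlay_rgba[..., 3:4].astype(float) / 255.0
    frame[y:y + h, x:x + w] = (alpha * rgb + (1 - alpha) * region).astype(np.uint8)
    return frame

# Synthetic 480x640 "camera frame" and a semi-transparent red square.
frame = np.full((480, 640, 3), 80, dtype=np.uint8)
sticker = np.zeros((100, 100, 4), dtype=np.uint8)
sticker[..., 0] = 255   # red
sticker[..., 3] = 128   # roughly 50% opacity
out = composite_overlay(frame, sticker, top_left=(190, 270))
print(out[240, 320])  # blended pixel inside the overlay region
```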
We have a list of great apps for taking AR photos if you feel like augmenting your own reality.
Portrait mode
The artificial blur imposed by portrait modes is another everyday example of computational photography. When snapping a photo in portrait mode, your device calculates where and to what extent to apply the blur effect in the frame to naturally mimic the depth-of-field look cameras with large sensors and wide apertures produce under similar conditions.
Implementations vary, but Google went into great detail about how it honed the portrait mode experience on the Pixel 6 series in 2022. The software model that works out which parts of a portrait mode photo should appear in focus was trained on photos of different people taken by a spherical array of hundreds of individual cameras and depth sensors, in conjunction with in-the-wild photos taken on Pixel phones. When you snap a portrait mode photo on your Pixel, the phone feeds the photo and a quick-and-dirty “mask” of what it thinks should be in focus through that model, ideally resulting in a convincing portrait-style photograph with natural-looking background blur.
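Setting the machine learning aside, the final step of blurring everything except the subject is easy to sketch. This toy portrait_blur helper takes a ready-made subject mask as input, which is an assumption for illustration; a real implementation would estimate a depth map and vary the blur strength with distance.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def portrait_blur(image, subject_mask, sigma=6.0):
    """Toy portrait mode: blur the whole frame, then use a feathered
    subject mask to keep the person sharp over the blurred background.

    image:        float array (H, W, 3) in [0, 1]
    subject_mask: float array (H, W) in [0, 1], 1 = in focus
    """
    blurred = np.stack(
        [gaussian_filter(image[..., c], sigma) for c in range(3)], axis=-1)
    # Feather the mask so the subject's edges blend naturally.
    soft = gaussian_filter(subject_mask, sigma=2.0)[..., None]
    return soft * image + (1.0 - soft) * blurred

# Synthetic frame with a circular "subject" mask in the middle.
h, w = 128, 128
yy, xx = np.mgrid[0:h, 0:w]
mask = ((yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2).astype(float)
image = np.random.default_rng(1).uniform(0, 1, (h, w, 3))
result = portrait_blur(image, mask)
print(result.shape)
```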
Software zoom
Digital zoom, at its most basic, is cropping an image to make the part you want to zoom in on appear larger in the frame. If you’ve done this yourself after the fact, you’ll know that the results can be less than spectacular, especially when dealing with a relatively small image (like the 12-or-so-megapixel photos most phones take by default). But through computational photography, software zoom can produce sharper, more lifelike images.
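For contrast, here’s roughly what the naive version amounts to in code: crop the center of the frame and resample it back up to full size. The digital_zoom helper and nearest-neighbor resampling are illustrative choices that make the quality loss obvious, not how any particular phone does it.

```python
import numpy as np

def digital_zoom(image, factor):
    """Basic digital zoom: crop the center 1/factor of the frame,
    then resample it back to the original size. Nearest-neighbor
    resampling is why naive digital zoom looks soft and blocky."""
    h, w = image.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = image[y0:y0 + ch, x0:x0 + cw]
    # Map each output pixel back to its source pixel in the crop.
    ys = (np.arange(h) * ch / h).astype(int)
    xs = (np.arange(w) * cw / w).astype(int)
    return crop[ys][:, xs]

frame = np.random.default_rng(2).integers(0, 256, (480, 640, 3), dtype=np.uint8)
zoomed = digital_zoom(frame, factor=2.0)
print(zoomed.shape)  # (480, 640, 3): same size, a quarter of the scene
```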
On the Pixel 7 Pro, Google’s Fusion Zoom algorithm “aligns and merges images from multiple cameras, ensuring that your photos still look great when they’re taken in between your main camera and telephoto camera.” In practice, that means any magnification between the main camera’s 1x and the telephoto’s 5x. At zoom levels of 20x and beyond, the Pixel 7 Pro leans on a machine learning upscaler “that includes a neural network to enhance the detail of your photos,” helping make sense of details in the far distance.
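The details of Fusion Zoom aren’t public beyond Google’s description, but the core intuition, weighting the telephoto frame more heavily as the requested zoom approaches its native magnification, can be sketched like this. The fuse_zoom helper, its linear weighting, and the assumption of pre-aligned, same-size frames are all simplifications for illustration.

```python
import numpy as np

def fuse_zoom(main_crop_upscaled, tele_frame, zoom, tele_zoom=5.0):
    """Toy stand-in for intermediate-zoom fusion: as the requested
    zoom approaches the telephoto lens's native 5x, weight the
    telephoto frame more heavily. Assumes both frames are already
    aligned and the same size; real pipelines do that alignment."""
    w_tele = np.clip((zoom - 1.0) / (tele_zoom - 1.0), 0.0, 1.0)
    return (1.0 - w_tele) * main_crop_upscaled + w_tele * tele_frame

a = np.full((64, 64, 3), 0.2)   # upscaled crop from the main camera
b = np.full((64, 64, 3), 0.8)   # frame from the telephoto camera
for z in (1.0, 3.0, 5.0):
    print(z, fuse_zoom(a, b, z)[0, 0, 0])  # drifts from 0.2 toward 0.8
```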
Samsung recently caused a stir when folks caught on to the fact that super-telephoto shots taken on the company’s high-end phones lean heavily on computational photography to recreate details on the face of the moon. Whether the resultant photos are “real” is up for debate, but they couldn’t exist without computational photography techniques. Phone cameras can’t resolve detail hundreds of thousands of miles away on their own.
More mobile photography
We’ve only scratched the surface here, but hopefully this gives you a better idea of what computational photography is. Interested in diving deeper into mobile photography? Check out our guides on the best Android apps for photographers and how to take and edit raw photos on Android.