
Computational photography: how does artificial intelligence work in smartphones?

Tap to take a picture on your phone and, within a second, algorithms build the best possible image for those conditions. It is an invisible part of the experience of using current smartphones, and an increasingly important one: artificial intelligence carries growing weight in the way we take pictures.

Although the term “computational photography” is, in technological terms, an old one, algorithms are shaping each new generation of smartphones more deeply than the last.
The latest example is the set of innovations introduced with the iPhone 11, where each photo is actually a composite of many shots taken at different exposures: the final photo is built from the best parts of each shot.

A similar approach, with some differences, was seen in the Google Pixel 3 and improved in the current Pixel 4.
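The general idea behind these multi-shot pipelines can be illustrated with a minimal Python sketch: given a burst of already-aligned exposures, blend them per pixel, favoring whichever frame is best exposed at each point. This is a toy version of the concept, not Apple's or Google's actual algorithm.

```python
import numpy as np

def fuse_burst(frames):
    """Blend a burst of aligned exposures, weighting each pixel by how
    well exposed it is (i.e. how close it sits to mid-gray).
    frames: list of float32 arrays in [0, 1], all the same shape."""
    stack = np.stack(frames)                      # (N, H, W, C)
    # Well-exposedness weight: peaks at 0.5, drops near clipped values.
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * 0.2 ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    return (weights * stack).sum(axis=0)          # fused (H, W, C) image

# e.g. fused = fuse_burst([under, mid, over]) for a three-shot bracket
```

Real pipelines also align the frames, denoise and tone-map the result; the per-pixel weighting above is just the core of the “best of each shot” merge.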

In short, artificial intelligence weighs more and more in the choice of a good smartphone for photos, and it is now present even in “popular” models, if in a less decisive way.
Let's look at the fundamental aspects that can guide a purchase in this area, focusing on the innovations that serve to make better shots (leaving aside those used for effects or for content recognition).

The basics of AI in smartphones.
Computational photography has long been used on reflex cameras as well (red-eye removal, automatic shot optimization based on the detected scene…), but on smartphones it is essential, to compensate for the necessarily smaller sensor. The developments of the last few years rest on specific technologies now spreading across all high-end smartphones: a chip dedicated to AI and a neural engine that applies machine learning (of the deep learning type) to improve the shot. In essence, manufacturers train the algorithm on many examples of labeled photos, making it better able to understand the situation and adapt the shot accordingly.
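A minimal sketch of what that training looks like, as a toy scene classifier in PyTorch; the labels, architecture and data here are illustrative placeholders, not any manufacturer's actual model:

```python
import torch
import torch.nn as nn

# Hypothetical scene labels; real systems recognize many more.
SCENES = ["portrait", "landscape", "night", "food"]

# Tiny CNN mapping a photo to one of the scene labels.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, len(SCENES)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, H, W) float tensor; labels: (B,) long tensor of
    scene indices. One gradient step on a batch of labeled photos."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

On the phone, the trained network runs on the dedicated AI chip; only inference happens on-device, the training itself is done by the manufacturer.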

The software layer can also be present on lower-end models (as is the case with the Pixels), but at the moment the hardware of the more expensive products is needed to get the best results, and some of the algorithm's heavier interventions are simply unworkable on less powerful chips. The photographic hardware, such as the lenses, obviously affects the final result too.

HDR and scene detection/recognition.
HDR (high dynamic range) appeared as early as 2010 and is now the default in many models. Small smartphone sensors struggle with areas of the frame that are too bright or too dark. HDR tackles the problem by taking two or more photos at different brightness levels and merging them into a single image, which then retains more detail in both the bright and the dark areas. The latest hardware evolutions have reduced the defects of this process, which used to risk giving photos an artificial look.
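The classic merge step can be reproduced with OpenCV's exposure-fusion tools; a minimal sketch, assuming three bracketed shots saved under placeholder file names:

```python
import cv2
import numpy as np

# Load an exposure bracket (file names are placeholders).
frames = [cv2.imread(f) for f in ("under.jpg", "mid.jpg", "over.jpg")]

# Roughly align the shots (handheld brackets shift between frames).
cv2.createAlignMTB().process(frames, frames)

# Mertens exposure fusion: merges the bracket without needing the
# camera's response curve, keeping detail in shadows and highlights.
fused = cv2.createMergeMertens().process(frames)

# The result is float in roughly [0, 1]; convert to 8-bit to save it.
cv2.imwrite("hdr.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```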

Scene detection is also very common: the AI recognizes the framed image and understands its contents, then automatically adapts white balance, brightness and other parameters to get an optimal shot in those conditions. If it detects a face, it can also “beautify” the facial features, for example by smoothing the skin.
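Conceptually, the adaptation step boils down to mapping the detected scene to a set of shot parameters. A hypothetical sketch (the scene labels and settings below are made up for illustration; real cameras tune far more parameters):

```python
# Hypothetical mapping from a detected scene to shot adjustments.
ADJUSTMENTS = {
    "portrait":  {"exposure_bias": 0.3, "skin_smoothing": True},
    "landscape": {"saturation": 1.2, "sharpening": 1.1},
    "night":     {"exposure_bias": 1.0, "noise_reduction": "strong"},
}

def configure_shot(scene_label, defaults):
    """Overlay scene-specific tweaks onto the default camera settings."""
    settings = dict(defaults)
    settings.update(ADJUSTMENTS.get(scene_label, {}))
    return settings

print(configure_shot("portrait", {"exposure_bias": 0.0}))
```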

Author: Mary Pierce
