Face Unblur on the Google Pixel – Bestgamingpro

The Pixel 6 and 6 Pro have a new feature called Face Unblur. It relies on Google’s machine learning algorithms to ensure that when you take a photo of a moving subject, the face doesn’t come out blurred.

It does this by capturing photos from the main and wide-angle lenses at the same time and merging them into a single image with crisp facial detail.

Face Unblur is based on machine learning.

Take a look at your phone’s photo gallery and you’ll likely find at least a few images where the person you’re trying to photograph is moving too fast, resulting in a blurry face.

Google’s Face Unblur feature aims to fix this problem on the Pixel 6 and 6 Pro.

This feature is based on custom Tensor hardware and machine learning technology from Google.

When the camera detects that a subject is moving too fast, it takes photos from the main lens and the wide-angle lens simultaneously and merges them automatically. Here’s how Google explains it:

Pixel 6 and Pixel 6 Pro simultaneously take a darker but sharper photo on the ultra wide camera and a brighter but blurrier photo on the main camera. Google Tensor then uses machine learning to automatically combine the two images, giving you a well-exposed photo with a sharp face.
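To make that idea concrete, here is a minimal sketch of mask-based fusion: keep the well-exposed main frame everywhere, but take the face pixels from a brightness-matched copy of the sharper short-exposure frame. The function name, the simple gain-based brightness matching, and the synthetic data are all illustrative assumptions; Google’s actual pipeline uses learned models running on the Tensor chip.

```python
# Toy sketch of merging a darker-but-sharper frame with a brighter-but-blurrier
# one. This is NOT Google's pipeline, just the basic idea with NumPy.
import numpy as np

def merge_frames(bright_blurry, dark_sharp, face_mask):
    """Combine two aligned frames: keep the well-exposed frame everywhere,
    but take facial detail from the short-exposure frame.

    bright_blurry, dark_sharp: float arrays in [0, 1], shape (H, W, 3)
    face_mask: float array in [0, 1], shape (H, W), 1.0 inside the face
    """
    # Match the brightness of the dark frame to the bright frame (simple gain).
    gain = bright_blurry.mean() / max(dark_sharp.mean(), 1e-6)
    boosted_sharp = np.clip(dark_sharp * gain, 0.0, 1.0)

    # Facial pixels come from the (brightened) sharp frame,
    # everything else from the well-exposed main frame.
    mask = face_mask[..., None]
    return mask * boosted_sharp + (1.0 - mask) * bright_blurry

# Tiny synthetic example: 64x64 frames with a circular "face" region.
h = w = 64
yy, xx = np.mgrid[0:h, 0:w]
face = ((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2).astype(float)
main = np.full((h, w, 3), 0.7)        # bright but (conceptually) blurry
wide = np.full((h, w, 3), 0.3)        # dark but sharp
result = merge_frames(main, wide, face)
print(result.shape, round(float(result.max()), 2))
```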

To sum up, the Pixels’ standard 50 MP main camera usually opts for a higher ISO sensitivity and a slower shutter speed in auto mode, resulting in bright, detailed images that handle movement poorly.

This is why, when you try to take pictures of your children while they are playing, their faces often appear blurry.
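To put rough numbers on why shutter speed matters here: the amount of smear is roughly the subject’s apparent speed across the frame multiplied by the exposure time. The figures below are hypothetical and chosen purely to show the scale of the effect.

```python
# Back-of-the-envelope illustration (hypothetical numbers): how far a face
# moves across the sensor during one exposure, in pixels.
def motion_blur_pixels(speed_px_per_s, shutter_s):
    """Apparent motion of the subject during one exposure, in pixels."""
    return speed_px_per_s * shutter_s

# Assume a running child sweeps across ~2000 px of the frame per second.
subject_speed = 2000.0
for shutter in (1 / 30, 1 / 250):
    blur = motion_blur_pixels(subject_speed, shutter)
    print(f"shutter 1/{round(1 / shutter)} s -> ~{blur:.0f} px of blur")
# ~67 px at 1/30 s (visibly smeared) vs ~8 px at 1/250 s (much sharper).
```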

Google’s strategy is to use both lenses: the camera detects the subject and decides whether it is moving too fast for the main lens before the photo is taken.

If it determines that it is, it instantly switches to Face Unblur mode, so that when you press the shutter, the camera uses both the main lens and the wide-angle lens to produce two images.
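Google hasn’t published how that decision is made, but a toy version of the idea might compare consecutive viewfinder frames and treat a large change in the face region as “moving too fast.” Everything below, from the threshold to the frame-difference metric and the synthetic crops, is an illustrative assumption rather than the Pixel’s actual logic.

```python
# Conceptual sketch (not Google's implementation): decide whether to engage
# a dual-lens capture by measuring how much the face region changes between
# two consecutive viewfinder frames.
import numpy as np

def subject_is_fast(prev_face, curr_face, threshold=0.05):
    """True when the mean absolute difference between two grayscale face
    crops (values in [0, 1]) exceeds a tuning threshold."""
    return float(np.abs(curr_face - prev_face).mean()) > threshold

# Example: a nearly static face vs. one that shifted noticeably.
rng = np.random.default_rng(0)
face_a = rng.random((64, 64))
nearly_static = np.clip(face_a + rng.normal(0, 0.01, face_a.shape), 0, 1)
moved = np.roll(face_a, 8, axis=1)              # face shifted ~8 px
print(subject_is_fast(face_a, nearly_static))   # False -> normal capture
print(subject_is_fast(face_a, moved))           # True  -> dual-lens capture
```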

Face Unblur works perfectly well, and it’s automatic – you don’t have to do anything.

The main lens captures most of the scene’s detail using a longer exposure, while the wide-angle shot is taken at a faster shutter speed.

Because it was captured at a fast shutter speed, the wide-angle photo retains the clarity of the subject’s face even as it moves, which lets Google’s machine learning combine the two frames.

The majority of the information in the final photo comes from the main lens, with the facial data provided by the wide-angle lens.
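One detail worth spelling out: the two frames come from physically separate lenses, so the face region has to be registered to the main photo before it can be blended in. Below is a minimal sketch of one standard way to estimate such an offset, phase correlation; this is an assumed, reasonable approach for illustration, not a description of Google’s actual alignment step.

```python
# Illustrative alignment step: estimate the translation between two frames
# with phase correlation before pasting facial detail from one into the other.
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the (dy, dx) circular shift that produced `moving` from `ref`,
    using phase correlation on two grayscale images of identical shape."""
    cross = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    cross /= np.abs(cross) + 1e-12          # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Interpret shifts past the halfway point as negative offsets.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

# Synthetic check: shift an image by (5, -3) and recover the offset.
rng = np.random.default_rng(1)
img = rng.random((128, 128))
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(estimate_shift(img, shifted))  # expected roughly (5, -3)
```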

It is not difficult for Google’s ML algorithms to combine these photos; after all, Google has been using facial recognition for almost a decade.

When a subject is in motion, the Pixel 6 and 6 Pro create a photo with all the elements and a clear face. The best part is that everything is done automatically.

There is no need to enable Face Unblur or adjust any settings while shooting. The Pixel 6 and 6 Pro are now among the best Android phones available, thanks to Google’s use of machine learning technology.
