Apple unveiled its latest 2019 iPhones last Tuesday in a keynote event viewed live by approximately 2 million people.
What these viewers may have missed is how Deep Fusion will transform some types of photography.
The two key features of the new iPhone 11 and iPhone 11 Pro models that set them apart from last year's iPhone XS and XR are the new camera lenses and the A13 Bionic processor. The iPhone 11, an upgrade of last year's XR, now has a dual rear camera system: a 12-megapixel wide-angle lens and a 12-megapixel ultra-wide. I think most iPhone buyers will get more use out of the new ultra-wide lens; I, however, really love the telephoto lens on the iPhone XS for zooming two times closer to subjects. The lack of a telephoto lens on the iPhone 11 camera will stop me from purchasing this more affordable iPhone.
The iPhone 11 Pro, an upgrade of last year's dual-camera iPhone XS, adds the ultra-wide lens to give the Pro three 12-megapixel cameras: wide-angle, ultra-wide, and telephoto. What really piqued my interest during the keynote was Phil Schiller introducing Deep Fusion, coming to the iPhone 11 Pro later this fall. Schiller playfully referred to this new feature as "computational photography mad science."
With Deep Fusion, the iPhone 11 Pro camera captures a burst of short-exposure frames along with one long exposure, then combines them pixel by pixel into a single image that is sharp throughout with very little noise.
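The basic intuition behind multi-frame fusion can be sketched in a few lines of NumPy. This is only a toy illustration of the general idea (averaging noisy short exposures and blending in a long exposure), not Apple's actual Deep Fusion pipeline; the function name and weights are made up for the example:

```python
import numpy as np

def fuse_exposures(short_frames, long_frame, long_weight=0.5):
    """Toy fusion: average the short exposures to suppress noise
    (std shrinks by ~1/sqrt(N)), then blend in the long exposure.
    NOT Apple's algorithm -- just the underlying averaging idea."""
    short_avg = np.mean(short_frames, axis=0)
    fused = (1 - long_weight) * short_avg + long_weight * long_frame
    return np.clip(fused, 0.0, 1.0)

# Simulate 8 noisy short exposures of the same scene (values in [0, 1]).
rng = np.random.default_rng(0)
scene = np.linspace(0.2, 0.8, 16).reshape(4, 4)   # idealized true scene
shorts = [scene + rng.normal(0, 0.05, scene.shape) for _ in range(8)]
long_exp = scene                                   # idealized clean long exposure

result = fuse_exposures(shorts, long_exp)
```

Averaging eight frames cuts the noise standard deviation by roughly the square root of eight, which is why burst capture is such a cheap win for low-light photography.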
The concept behind Deep Fusion, however, is not new to photographers. Deep Fusion resembles focus stacking, a computational technique available in software like Adobe Photoshop that combines several images into a new, improved image. Photographers focus-stack images to bring both the background and the foreground into focus in a single shot.
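A minimal focus-stacking sketch, assuming grayscale frames as NumPy arrays: for each pixel, keep the value from whichever frame has the strongest local Laplacian response, a common sharpness proxy. The helper names are hypothetical, and real tools like Photoshop also handle alignment and seam blending:

```python
import numpy as np

def laplacian(img):
    """4-neighbour Laplacian (with wrap-around edges) as a crude
    per-pixel sharpness measure: large magnitude = fine detail."""
    return (np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)

def focus_stack(frames):
    """Naive focus stack: per pixel, copy the value from the frame
    whose local Laplacian response is strongest (i.e. sharpest)."""
    stack = np.stack(frames)                              # (n, h, w)
    sharpness = np.abs(np.stack([laplacian(f) for f in frames]))
    best = np.argmax(sharpness, axis=0)                   # sharpest frame index per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]

# Demo: frame_a is sharp (checkerboard detail) on the left half only,
# frame_b on the right half only; the stack recovers detail everywhere.
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)
cols = np.indices((8, 8))[1]
frame_a = np.where(cols < 4, checker, 0.5)   # right half "out of focus"
frame_b = np.where(cols >= 4, checker, 0.5)  # left half "out of focus"
stacked = focus_stack([frame_a, frame_b])
```

Away from the seam, every pixel of `stacked` comes from whichever frame held the checkerboard detail there, which is exactly the behavior focus stacking relies on.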
What is amazing about Deep Fusion is that I will not need a computer and Photoshop for this kind of image stacking. Instead, the iPhone 11 Pro can perform these processor-intensive tasks within seconds. This speaks to the great progress and innovation of Apple's ARM chip designers. I am looking forward to trying out Deep Fusion images this fall to see how they work with macro photogrammetry, the process of creating 3D models from images.