Low-light, extreme-contrast images like this are normally very difficult to obtain with a smartphone. In this case, Adobe's Lightroom HDR raw capture (iPhone app) and some serious computational processing allow for results not far removed from those of a regular DSLR or mirrorless camera.
Since the beginning of photography there have been several significant revolutions: the move from glass plates to film, the Kodak Brownie, the adoption of smaller formats (mainly 35mm), colour slide and negative film, digital photography and the smartphone. The next frontier, already well underway, is computational imaging.
The old imaging methods (by which I mean traditional digital methods, not film) are being replaced by processes and options that to many photographers appear to be almost witchcraft, possibly even photographic heresy.
In simple terms, we're now rapidly moving from photography where physics determined the results to photography where computational (software) methods rule the roost. We still need good-quality lenses and sensors, but we can now produce images that transcend the physical limitations of those components.
Most of my peers are no longer in the business: some found the move into the digital age too stressful, some didn't want to learn anything new, and many went broke. I have the greatest respect and appreciation for all we have previously seen and done in photography, but I firmly believe that photographers need to embrace these new developments; they are not going away.
The new options offered by computational imaging are fascinating; some have even the most adventurous and adaptable of us scratching our heads and wondering "is that such a great idea?". However, the digital world won't stop just because we find it a wee bit challenging, so it's a matter of getting up to date and with the program. Hopefully this article helps put a few things in perspective. I've added some links at the end for those who want to dive a little deeper.
Thinking About The Losses
Sadly, many of the skills I've developed over the last 40-plus years no longer count for much, if anything, so I appreciate the potential angst photographers associate with computational imaging. Here's my short personal list of lost methods and options; it may strike a chord with some of you.
Large format camera control? Mostly irrelevant, and seriously, who can afford to shoot LF these days? Film is crazy expensive and the processing is a pain to access, at least in my neck of the woods.
Fancy chemical concoctions for custom processing film and negs? Dangerous and messy and not very convenient.
Darkroom techniques to extract the more delicate details and tones in printing? Why bother? Photoshop and inkjet printing are the go now.
Neg duplication? Don't be stupid, "scan 'em to digital"!
Direct image restoration methods? Have you never heard of scanners and Photoshop?
Hand tinting? What the hell is that?
There's a lot more besides this little list, but all of these things once made me money; today, using them would send me broke. As Bob Dylan sang, "the times they are a-changin'", and I'd rather change with them; in fact, I generally enjoy the challenge. Perhaps a little personal history is in order to set the scene for where we have come from and where we are perhaps going.
A Little History
I adopted digital methods around 1995, mainly out of necessity: I'd developed a severe allergy to most photo chemicals and subsequently became very ill. I even left professional photography for a while because of this, so when the digital revolution gathered momentum in the mid-90s, I had no qualms about jumping on board.
At first, it was not a happy marriage; everything attached to digital methods was stupidly expensive and low on quality, and most of the tools were downright primitive compared to what we take for granted today.
I've got a collection of early digital cameras to remind me of the shortfalls and of how far we have come. Fortunately, despite the doom-filled predictions of Luddites, by around 2004 digital had made several quantum leaps, most of the annoying issues had been resolved, and the quality by then was more than adequate. The rate of development in that period was far quicker than predicted, especially between 2000 and 2003; this should be a salutary lesson for the current state of play in the "smartphone versus DSLR" wrangle. Many photographers firmly maintain that smartphones will never be good enough; I'd suggest history tells us they are wrong.
The iPhone Revolution
The next giant leap for me was mobile photography, which commenced in mid-2007 with the release of the first iPhone in the US; we Aussies didn't see the magical new device until June 26, 2009.
My wife brought home the newfangled iPhone 3GS on day one. I held out for a few weeks but soon realised I'd be left behind by Wendy if I failed to possess the magical new device.
Like the move to digital, it's taken mobile imaging a decade or so to reach a stable level of maturity and sophistication. There's truly little correlation between the image quality of the first serious smartphone images I made in 2009 and those I can easily create on my current iPhone X; it's an absolute chalk-and-cheese comparison. Again, the rate of development should be a significant lesson, especially when you consider that the change in image quality for regular cameras over the same period has been relatively minor.
The initial iPhone (which we didn't get) had an inferior fixed-focus 2-megapixel camera; taking photos was not high on the Apple agenda, and pic quality was marginally useful at best. The first model we saw in Australia, the 3GS, sported a 3-megapixel camera with better, though still poor, performance; heck, you couldn't even control the exposure.
How times change. Today the significant developments are almost all on the mobile imaging platform; these include the adoption of RAW capture, more open platforms, AI as applied to imaging, dual and triple sensor capture, depth engines, and far more. The imaging technology crammed into your average high-end smartphone is mind-boggling, and truthfully very few people have a clear understanding of all the processes involved; most are just glad it works.
The rapid improvements will continue, they'll build upon one another, turbo-charged continuously by the work of thousands of highly creative software and app developers, rather than just the efforts of a few in-house hardware and software techs, which is the case with traditional camera makers.
The Next Step: Global Shutters
As I write this, the new iPhone XS models sport a global shutter, and we'll almost certainly see this within the next year on other iPhone and Samsung devices; it's also likely that other upstart Chinese makers like Huawei will jump on board as well. Already camera testers are singing the praises of the real-world results this single change has brought to the iPhone XS cameras.
Regular cameras will, of course, follow suit with global shutter options, but the advantages will be less pronounced, and regardless, it will still be some time before they adopt the option, as it requires a significant re-design and is vastly harder and more expensive to implement with larger sensors.
The advantages of full global shutters are the speed of shooting, no physical noise, no shutter re-cocking, no distortion of the image, no moving parts to wear and a whole heap more. The critical point is that true global shutters enable vastly better implementation of computational processing methods.
Traditional Capture versus Computational Methods
Yep, it's traditional capture versus computational methods. With computational imaging, the processes are almost more important than the sensor and lens. Lenses and sensors are limited by physics, whereas computational methods are limited by battery power, processing speed and the imagination of the software developer, all aspects smartphones have well and truly nailed.
Smartphone developers have no real choice but to go down this computational path; most of the low-hanging fruit regarding sensors and lenses has already been picked, and the laws of physics are now significant limitations on further image quality improvements.
Batteries, processing power and algorithms, on the other hand, continue to develop at a rapid pace.
Traditional camera makers, of course, have much the same problems, but realistically all of the improvements in the last few years at the capture end have been mostly minor incremental changes that probably work better from a marketing aspect than in actual shooting performance, though these changes have kept their tills turning over.
For regular camera makers, the benefits of computational processes won't provide as much leverage; the hardware performance of their tools is already perfectly acceptable as far as most consumers are concerned. Smartphones, on the other hand, have much ground to gain by bringing their performance more "in line" with the current crop of DSLR and mirrorless cameras.
For many consumers, regular cameras and smartphones are already line ball, but more serious shooters remain well aware of current smartphone imaging limitations.
Computational imaging has long been a reality; realistically, all digital processing and editing is computational. But in this instance we're talking about far more powerful processes, ones that change capture itself and positively impact our imaging options.
At a basic level, computational methods can work simply to improve image quality, but at the pointy end they may also create content that doesn't exist in the real world; it's the latter option that I, like many photographers, find a bit confronting.
Just last year, tech company Nvidia demonstrated a new tool/process that created new, unique human faces using the input of real faces via machine learning and AI methods; it is both fascinating and creepy. More recently, Nvidia has demonstrated "content-aware" fill methods that make Adobe's version look pretty lame. Will this find its way to your smartphone? Probably.
The Google Pixel achieves much of its stellar output not because of an amazing sensor or lens (as far as I can gather, neither is anything truly special) but via a very clever combination of machine learning and AI, so the future is already here!
What Computational Imaging Offers
Computational options may include the following:
Image stacking to reduce noise, increase detail, remove objects in motion, increase the depth of field, increase dynamic range and more.
Use AI (artificial intelligence) to guess at missing content, remove or add objects, provide meaningful metadata, better handle exposure and colour issues, choose appropriate exposure settings.
Fusing the output from two or more separate lens/sensor modules to allow for better detail, lower noise or even zoom effects.
Machine learning to help build better AI implementations and customise your equipment to your specific uses.
Create VR (virtual reality) content or blend real content with virtual content.
Augmented reality for advertising.
Off device super computer processing to carry out highly complex processing that exceeds the capacity of the device.
Depth mapping to control depth of field and drive selective editing processes. Currently we are at level one; the sophistication will improve very quickly from this point onwards.
Image stabilisation processes that take into account more variables, even subject movement and direction.
White balance control using AI and up to date weather/location info. It will likely also include altitude, direction and possibly even a separate colour sensor module to read the light temperature itself.
Suggestions for framing and composition based on real time content.
Lighting effects simulation.
Intelligent image stitching using AI and machine learning.
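For the curious, the stacking idea at the top of that list is simpler than it sounds. Here's a minimal sketch (plain NumPy, and emphatically not any vendor's actual pipeline) of per-pixel median stacking, the trick that lets a phone average away sensor noise and drop moving objects from a scene:

```python
import numpy as np

def median_stack(frames):
    """Combine aligned frames by taking the per-pixel median.

    Random sensor noise differs from frame to frame, so the median
    lands close to the true scene value; anything that moves between
    frames is an outlier at each pixel and simply disappears.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    return np.median(stack, axis=0)

# Demo: a flat grey "scene" captured 9 times with simulated noise
rng = np.random.default_rng(0)
scene = np.full((4, 4), 128.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(9)]

single_error = np.abs(frames[0] - scene).mean()
stacked_error = np.abs(median_stack(frames) - scene).mean()
# The stacked result sits much closer to the true scene than any
# single noisy frame does
```

Real implementations also have to align the frames first (your hands move between exposures), which is where the phone's processing grunt comes in.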
Now that's a big list, and no doubt there are other options I've not mentioned or perhaps even considered. What you may have realised is that most of these things already exist in one form or another, so none of this is wishful thinking. You may also have noted that, with few exceptions, none of these are easily applied to regular cameras in their current state.
What About The Hardware
Most computational processes require changes to the hardware, including additional lenses and sensors, different sensor designs, infrared sensors, global shutters, lidar-type options and more. Many of these hardware factors are beyond regular DSLR/mirrorless camera technologies; they'd require radical re-designs to implement and most likely very different form factors. For example, many processes require more than one lens, though as said, Google has performed a few miracles with the single-lens Pixel smartphones!
Most likely, consumers would resist regular cameras with radically different forms; they've certainly done so in the past. Mobile devices, with their small form factors, are far more likely to get away with extra lenses, sensors and other bits and pieces.
Connectivity Is An Issue
One common thread in many of these computational processes is the need for connectivity to the web, which kind of rules out regular cameras as we know them.
Many photographers still argue that connectivity is irrelevant to them, perhaps so, but my take is that photography is a form of communication, it's about telling stories and anything that gets in the way of this goal will limit the usefulness of the device.
Regular cameras may not need internet connections, but they should at least be able to transfer their images seamlessly to your mobile devices. Most still can't do even this reliably without messy workarounds, some are just plain infuriating, and firmware/software updates are often still ridiculously messy.
Meanwhile, back in the "real" camera world, manufacturers and photo forum lurkers are expending considerable energy arguing about the clash of mirrorless versus DSLR technology, something I find rather quaint in light of current developments. Currently, the big news that has them excited is Nikon and Canon's new mirrorless cameras, which are only about five years too late... ho hum. I'm not denying these are very good devices, but really, what's truly new or revolutionary about them?
A recent article on DPReview, involving interviews with camera manufacturers about the mirrorless developments at CP+, garnered almost 1200 often profoundly hostile responses in just 24 hours from DSLR adherents; clearly many folk have entrenched positions where equipment is concerned. You'd have to be very brave, stupid, or wearing flameproof clothing to venture down the pathway of comparing mobile imaging with regular cameras on these often highly polarised sites.
But... I reckon these folk are dead wrong.
Seriously, for many users the war is over: traditional camera makers have lost the mass market. The funeral is a long way off, but unless they radically change their business models, they're all due for even more severe pruning of both income and profits. Their lunch, and probably afternoon tea as well, is now being eaten by Apple, Samsung and quite a few "late to the table" Chinese upstarts. Forget the sideshow of the DSLR versus mirrorless fisticuffs; for many consumers that's a mere distraction from the real imaging action, and ultimately it's probably a case of regular camera makers needing to follow smartphone makers.
What's Next?
I'll make a couple of comments that I feel have relevance to photographers wondering what they should do or perhaps buy next.
Sad to say, the DSLR in particular is a technological backwater at the fag end of its developmental life; it surprises me it lasted this long. Most of the DSLR releases of the past three years have been tepid at best, and none have shown any great innovation. The only reasons sales remain robust are that it takes a long time for "Joe Public" to change his ways, and that low-end DSLRs are now crazy cheap.
DSLRs have remained the "long-term default purchase", but the market is now moving very quickly towards mirrorless, and the traditional DSLR benefits are being lost to the relentless innovation in the mirrorless camp. Even Nikon and Canon have finally leapt into the mirrorless market; that should tell you something!
Mirrorless cameras now focus as quickly and accurately as DSLRs, have better and more useful viewfinders, and offer plenty of lens options. It's hard to make a solid pro-DSLR argument that's not steeped in teary-eyed nostalgia for our old friend. Though, as an owner of a full-frame DSLR, 13 or so lenses and a whole raft of accessories, I well understand why some would want to argue the case. I still use my now 10-year-old FF DSLR for paying work, but there's no way I would replace it with a new DSLR.
We’ve reached a point where the mirror, pentaprism and other needed mechanical bits are impeding development or preventing it altogether.
With all that said the combo of a modern mirrorless camera and a new smart phone might just be the killer combination that will fill the needs of most non-professional shooters.
Increasingly I've found that I can go on holiday and cover all my needs with my M4/3 mirrorless camera with a tele zoom lens and my iPhone shooting in RAW. This combo gives me superb flexibility and low weight, with results that are easily good enough; I certainly don't feel I'm missing out in any way. Even more importantly, with a few choice apps on the iPhone I can delve deeply into some very creative aspects of photography and edit to my heart's content. What's not to love?
So Why The Angst Over Mobile Photography?
I suspect the concerns among photographers relate to the erosion of the value of their skills, fear of new options that are difficult to understand, the cost of changing equipment and perhaps the idea that image quality will be suspect.
However, deep down, it's most likely just the idea that all our hard-won methods, skills, abilities and expensive equipment will be rendered obsolete by an influx of the "great photographic unwashed" with their soulless universal photographic devices.
New smartphone options have created imaging possibilities we could only dream of a few years ago; creatively, there's never been a better time to be a photographer. If these exciting new options bring new people into the fold who have more artistic than technical capability, well, I'm okay with that. Importantly, the additional options and ease of use may free up some longer-term shooters enough to let them explore aspects they've until now thought "off limits"; it certainly did for me.
For my part I have gained more enjoyment and creativity from my mobile photography than I ever did with either my film or regular photography.
The real challenge is that the skill-set needed to navigate and implement these new options is different and in some cases involves concepts without parallel in traditional image making. Consider: the change to digital altered the tools of capture and how we edited our images, yet it was not truly difficult to grasp because nothing fundamental changed; even the terminology stubbornly remained the same in most cases. Computational imaging, on the other hand, radically changes the tools and, more importantly, radically alters the methods, processes and possibilities.
It's a confronting challenge. Many traditional photographers, when looking at the work of other shooters, still ask the hoary old "what did you shoot that with?" question; in the future that question will be meaningless, and it already should be. The idea that all you need for success is to buy "XYZ" camera and lens will finally bite the dust, and not before time either.
Once They Were Technocrats
Photography was once the domain of the technocrats, many had a good degree of artistic flair but more importantly they had access to an expensive arsenal of tools that average consumers did not. Now and in the foreseeable future, it will be more the domain of the creatives, though it will always be the case that good technical knowledge and skills will continue to help enormously.
In the not-too-distant past, the technological difficulties (especially in the film era) were so overbearing that technical skills and gear often truly mattered more than art. Once upon a time the differences in lens quality were enormous and tied tightly to price, the difference between the output of a cheap compact camera and a DSLR was a chasm, and the difference between 110 and medium format film was beyond any meaningful comparison.
In the past money did indeed equal results from a quality perspective, which was a big win for camera makers wanting to push aspiring photographers up the equipment ladder.
Today the high-end gear is better than ever, but the low end has improved far more by comparison. Image-quality sufficiency is now achieved at a vastly lower relative price point, with the benefits of the most expensive equipment only being realised within a minimal set of circumstances that just don't matter to most consumers.
Not long ago it was a minor miracle to get an image correctly exposed, focused and framed; those factors alone carried colossal brownie points. Now they're taken for granted. Artistry and content are what remain for photographers to struggle with, which for most of us presents a far greater challenge, one that can often only be met through very extensive experience and time.
Photography as a Core Literacy
New computational image developments have further driven photography down the path of being a core literacy. As the capture tools become increasingly irrelevant, lighting, composition, message, and content take their rightful place as the differentiators between image makers.
Photography has become a mass communication method no longer encumbered by high cost and inconvenience; a picture was once said to speak a thousand words, and now it speaks entire volumes. So where are we now on the continuum? The technical hurdles have largely been knocked down and the cost of entry is incredibly low, but the bar of artistry has been raised enormously.
Smartphones have a considerable head start in facilitating photography as a literacy; they're spearheading the current technological race as precocious upstarts that, in the space of just a decade, have changed mass-market photography by probably as much as the Box Brownie did all those years ago. The processing power of the modern smartphone is enormous, even compared to modern desktops and laptops, and additionally they have full connectivity, allowing them to leverage supercomputer processing and many other connected options. Just 10 years ago, today's smartphone would have seemed a tech fantasy.
All of the above means the smartphone has become a natural way of communicating; it's now the visual typewriter, allowing us to say things in ways we could only dream of in the past. Here lies the core issue: visual communication is no longer a novelty, it's commonplace and cheap, but the flip-side is that for an image or video to gain traction it must have very good content and message, great composition and a bit of pizzazz; it is no longer sufficient for it to be merely technically competent.
Computational processes ultimately do three things: they make the quality better, which sorts out the technical competence bit; they make it easier to get results in problem shooting situations; and they increase the array of creative options available. All of these aspects expand the communication potential of the device, and right now regular cameras are being quickly chased down by smartphones and in many situations have been made completely redundant.
Comparing The Computational Options
So for photographers rather than casual consumers where is all this computational mumbo jumbo going?
First, for comparison let's consider the currently accepted advantages of traditional DSLRs and Mirrorless, they have...
Depth of Field control via the choice of aperture and lens type.
Lower image noise in all situations due to vastly larger sensors.
Higher resolution, currently 16-50mp or so.
Superior low light performance, especially for the full frame versions.
Easy telephoto lens options.
Now, are any of these advantages unassailable by modern smartphones? No; all can or will eventually be dealt with via computational imaging methods and hardware developments, and while the current implementations may be less than perfect, the rate of development is extremely rapid.
The last 12 months alone have seen enormous improvements; it's as if the manufacturers have found top gear and switched on the nitrous oxide. Even the portrait mode options on iPhones have leapt ahead via alternative apps, and to be honest most casual shooters were fine with the results from the initial product releases. Will serious shooters ever be happy with the "portrait" results? Yes, I am sure they will, though prejudice will get in the way for some time yet.
Smartphones also have core advantages over regular cameras...
They're always in your pocket, something not to be under-rated!
Smartphone screens are vastly better than virtually any traditional camera screen, which can make composition and playback far more pleasurable, so long as you can see the screen in the first place.
Smartphones have connectivity to the net at all times.
There are vast "in phone" editing options for both raw and compressed files which easily exceed anything on regular cameras.
Smartphone ease of use when you just can't be bothered and want to shoot in auto is pretty amazing and consistent.
A smartphone tends not to intimidate your subject or users for that matter.
There are still clear disadvantages, and of course, many traditional photographers are super keen to point these out.
Smartphone low light performance still sucks if the situation gets dire.
The ergonomics are just horrible; they’re slippery and hard to hold steady, it's often a case of form limiting function.
Lens options are not great, most "add-on lenses" are terrible and the attachment methods are suspect at best, and there's no universal standard for lens attachment in the first place.
Screens despite being terrific can be tough to see in bright sun.
To pull the advantages and disadvantages apart let's see where the truth in 2018 sits.
Top-drawer smartphone lenses already resolve at extreme levels; taking sensor size into account, the performance of the better lenses is excellent in all measurable ways, including chromatic aberration, vignetting, cross-field clarity and contrast. My tests with DNG files have shown some are considerably better than almost any regular camera lens.
As smartphone manufacturers add additional lens/sensor modules, the resolution gap between regular cameras with their multiplicity of lens options, and smart-phones narrows.
Traditional photographers often get huffy and dismissive, dealing out the old "smartphone image quality is poor" card. Perhaps the differences are much less evident than they believe; the improvement from one smartphone model to the next is usually quite profound, and the image quality difference between, say, an iPhone 6 and the XS series is enormous. Smartphones are usually kept for at least a couple of years, so many photographers likely have no current yardstick for comparison.
At this point, I'll mention that old chestnut of a counter-argument many photographers use to refute smartphones as viable cameras: "Buying a new phone every two years makes it an expensive camera". I find this argument disingenuous at best; you hardly buy a smartphone just to use the camera. Surely you'd use the net, make calls, keep a calendar, create notes and all that other stuff; the camera is a very handy bonus.
When you take into account all the device does, the value factor is very high, and for many people it might mean you don't actually need to spend money on a traditional camera at all! (That's, of course, the last thing Canon and Nikon want to hear, but it's no doubt music to Apple's and Samsung's ears.)
The Zoom Future
Most serious shooters believe that smartphones are no good for sport and birding. True enough, but this is a relatively small subset of most people's needs. Fear not, the folded-zoom smartphone is now on the horizon.
Folded zooms will probably give you at least a 125 to 150mm equivalent; still not great for sport or birds, but better than what you have now, and no doubt that range will extend with time. However, for at least the next few years you'll undoubtedly need a DSLR or mirrorless camera if you want real telephoto reach. In the interim, cameras like the brilliant fixed-lens Sony RX10 Mk4 seem all the more sensible and appealing.
Combining folded zooms with various types of computational methods, however, may actually extend the zoom range well beyond the 150mm equivalent mark; it wouldn't surprise me if in another six or so years we have smartphones offering a 400mm equivalent in our pockets!
Poor Quality in Low Light?
Again true, but the performance of smartphones is quickly improving and is not all that bad. While many regular cameras offer excellent results in low light, users still often struggle to get sharp focus, adequate depth of field and freedom from camera movement. However, if you want to shoot seriously low-light situations or capture star-fields, you can forget any current smartphone option.
The ace up the smartphone's sleeve is the now-common use of image stacking methods to reduce noise and increase quality. While the quality is still not as good as that of a large sensor, the results using new apps and computational methods are becoming quite acceptable. The pics of the church interiors I have used in this article are a good demo; they were shot using the Adobe DNG HDR function in Lightroom Mobile. The scenes have enormous contrast and low light levels, but the results are really quite acceptable and would print fine.
Focus Issues?
This is just not true today; many phones use both phase and contrast detection methods, and lenses can focus very quickly due to the low mass involved. I'd say we are just one generation short of seamlessly good focus across all situations.
Focus issues still occur in very low light or where there is insufficient contrast, but many regular cameras struggle under the same circumstances; I could name several DSLRs that give up the ghost as soon as contrast falls off, and many that have particular difficulty in live view.
Not for portraits?
This is not strictly true today, and the situation is rapidly getting better.
Some newer smartphones have tele lenses in the 70mm equivalent range, which with a little cropping is close to perfect for most closely framed portrait needs; even so, great portraits can be shot in the 50 to 70mm equivalent range.
The real change factor here is the new-fangled portrait modes that simulate depth of field and lighting effects, which leads us to the old "DOF is hopeless with smartphones" argument. Look, I honestly think only a few photos need super-shallow DOF to work visually; in some ways that shallow-DOF look is a crutch for those unable to, well, compose the shot, and I certainly feel it's used as a default way too often.
I don't think it's critical to really simulate that "shot at f1.2" look, but some decent depth of field control is desirable, and DOF simulation is THE major frontier of development at present. Many photographers appear to have an ethical issue with simulated DOF, but in the long term it should be possible to simulate the look of pretty much any lens and aperture setting, and who cares how that's achieved, so long as the images look pleasing.
It's even possible now to shoot in DNG and get post-capture aperture adjustment with the right apps on an iPhone X; just imagine where we will be given another two years of development.
A portrait of my wife on her 58th birthday, taken in portrait mode on my iPhone X, sure it's not perfect but most people would be more than happy with the result.
Resolution
As for resolution, the current default seems to be around 12 to 16 megapixels, enough for any sensible print size and already way more than the 1 to 2 megapixels needed for social media and web use.
There are a few outliers in the mobile world with higher resolutions, but the benefits have proven marginal due to smaller pixel sizes and lens limitations. A point to note: to double the linear resolution from the current 12-16 megapixels, you'd need to go to about 64 megapixels. The real-world difference between 16 and say 24 or even 30 megapixels is not that great regarding print size possibilities.
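To spell out the arithmetic behind that claim: print resolution scales with the linear pixel dimensions, so doubling it quadruples the megapixel count. A quick sketch (the 4:3 sensor dimensions below are illustrative assumptions, not the specs of any particular phone):

```python
# Doubling linear resolution quadruples the pixel count.
# The 4:3 dimensions below are illustrative, roughly a 16 MP sensor.
def megapixels(width_px: int, height_px: int) -> float:
    """Total pixel count expressed in megapixels."""
    return width_px * height_px / 1_000_000

w, h = 4608, 3456                      # ~15.9 MP
base = megapixels(w, h)
doubled = megapixels(2 * w, 2 * h)     # twice the linear resolution

print(f"{base:.1f} MP -> {doubled:.1f} MP")  # 15.9 MP -> 63.7 MP
```

That quadratic relationship is also why the jump from 16 MP to 24 MP buys so little extra print size: the linear gain is only about 22%.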
I feel that most of the bad press for smartphone resolution is the result of stupidly high levels of JPEG compression and noise reduction, something that becomes immediately obvious if you shoot RAW/DNG on any late-model phone. DNG is a game changer when you want fine detail, though despite it being an option for a while, it's only recently that consumers and even serious photographers have become aware of it and its impact on mobile image making.
DNG and Small Sensors
My experience is that smaller sensors benefit more from RAW/DNG than larger ones. At the pointy end of mobile imaging, RAW makes a huge difference and is not difficult to use. Smartphone users have less hassle with DNG/RAW than regular camera users do, because the conversions can be done to perfection on the device. Note, though, that on many smartphones you need an alternative camera app to access RAW capability; for example, the standard iPhone camera app still does not offer DNG/RAW capture.
Some regular cameras offer "in-camera DNG/RAW conversion", but it's usually clunky and minimal; few photographers would bother with it, and the two methods cannot be compared.
Increasingly, better capture apps are adding RAW/DNG. Truthfully, none yet offers the perfect implementation, but the improvements are coming thick and fast, weekly at the moment.
Appropriately processed RAW files can look quite analogue, thanks to their noise characteristics and saturation mapping, if exposure is held back at low ISOs, but even at higher ISOs the improvement in detail offered by shooting RAW is extraordinary. Generally I find that phone makers attempt to eliminate all image noise within the compressed files, but this trades off detail and texture; with RAW files you can tune noise to taste, and in real usage a little noise is not a bad thing and leaves the image looking more organic.
I imagine that we'll soon see the combination of DNG/RAW and smart in and off-phone computational processes to push the quality to even higher levels and provide extra flexibility, the possibilities really excite me.
The hardware, of course, still plays a significant part in quality, and makers have pushed the physics of smartphone sensors and lenses a long way in the past three years, but most of the big improvements have been related more to processing and computational methods.
Shadow Noise and Quality
The limit on image quality for smartphones isn't highlight rendering, as some photographers probably assume; after all, most of us hate that bleached-out highlight look. Rather, it's shadow noise that knocks the stuffing out of image quality.
Consider this: if you reduce the noise at the shadow end, you can cut back exposure to control highlights and thus get good tonality. Image stacking methods make it much easier to obtain low noise levels, and certainly native shadow noise is vastly improved on the current crop of smartphone sensors compared to those of just a generation or two ago.
There are many pathways to crack this shadow-noise nut. We could:
Image stack a series of identically exposed images, none of which clip the highlight tones.
Use HDR methods with 2, 3 or more images exposed at different shutter speeds.
Fuse the outputs of several lens/camera modules that are identical but exposed without any highlight clipping.
Fuse the outputs of several lens/camera modules that are not identical.
Fuse the outputs of two camera lens modules, one for colour information and one for monochrome information.
Implement a sensor that has variable ISO at the pixel level.
Moreover, there are other ways as well. The point is, regardless of the method used, all involve significant computational processing to create the end output, and all are far easier to implement with small-sensor smartphones than larger-sensor DSLR and mirrorless cameras.
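As a rough sketch of the first of those pathways: sensor noise is largely random from frame to frame, so averaging N identically exposed frames cuts it by roughly the square root of N. This toy simulation (NumPy, entirely synthetic data; a real pipeline would also align the frames and mask moving subjects) illustrates the principle:

```python
import numpy as np

rng = np.random.default_rng(42)

# A flat grey "scene" plus random sensor noise in each of N frames.
N = 16
clean = np.full((200, 200), 50.0)                 # true scene luminance
frames = [clean + rng.normal(0, 10, clean.shape)  # noise std dev = 10
          for _ in range(N)]

# Stacking = averaging; the random noise partially cancels out.
stacked = np.mean(frames, axis=0)

single_noise = np.std(frames[0] - clean)   # ~10
stacked_noise = np.std(stacked - clean)    # ~10 / sqrt(16) = ~2.5
print(f"single frame: {single_noise:.1f}, stacked: {stacked_noise:.1f}")
```

That quieter shadow floor is exactly what lets you expose for the highlights and lift the shadows later without the image falling apart.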
Various players in the field have already implemented all of the above methods; most are not yet ready for a full-frontal, prime-time DSLR attack, but you can be sure they soon will be.
Other direct hardware options could include even deeper pixel wells, pixel binning from higher-resolution sensors, more precise control of variable pixel quality across the sensor, precise pixel mapping for the whole sensor, wider apertures (f1.4 should be possible with current technology), and a combination of mono and colour sensors for better capture.
Ultimately, I reckon all currently perceived low-light/quality limitations of smartphone cameras will be solved by a combination of hardware (especially global shutter options) and software changes that use computational methods.
Going further into the future, we could have switchable pixel filters capturing RGB and luminosity in ultra-quick succession, and vastly improved stabilisation options that use lens and sensor along with delay options to outdo Olympus. The latter would let us use longer exposures, and thus lower ISO settings, more easily (provided subject movement was not an issue, and even then there are well-established computational solutions to help with that too).
Bit Depth
With the base level of image noise reduced, it's likely we'll see higher bit depths for RAW capture; currently, the base noise level negates the benefits of higher bit depths. Higher bit-depth capture should reduce the banding we sometimes see in yellows and skies even with RAW files; it's an issue that has remained a PIA for many serious mobile shooters, and a fix would be most welcome.
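The banding connection is simple arithmetic: each extra bit doubles the number of tonal steps per channel, and banding appears in smooth gradients once neighbouring steps are far enough apart to be visible. A quick illustration:

```python
# Tonal levels per channel at common capture bit depths.
# Banding shows up in smooth gradients (skies, flat yellows) when
# neighbouring quantisation levels become visibly far apart.
for bits in (8, 10, 12, 14):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels:>6} levels per channel")
```

Going from 10-bit to 12-bit capture, for instance, quadruples the number of available steps across that troublesome sky gradient.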
Summing Up
In the end, it's just much easier to implement advanced computational options when the device has a small lens and sensor (or multiple sensors), ultra fast shutters, powerful processors and constant net connectivity.
The current target is really about bringing smartphone "image quality and look" up to the level of DSLRs and mirrorless cameras without the downsides of complexity and weight, because that's what the mass market wants and will pay a premium for. Of course, many of these processes can be applied to regular cameras (some already are), but ultimately the cost is prohibitively higher and the benefits far smaller for end users.
Finally, even now, for a vast number of users the image quality from the latest smartphones is entirely sufficient; many consumers instead want better ergonomics and flexibility for general shooting needs. I, for one, imagine some sort of standardised accessory lens mount, with high-end lenses to match, would be a winner, judging by the number of people I come across struggling with the current offerings.
Most smartphone shooters are not too worried about shooting sport, extreme low light, star fields or professional jobs, and those who are own other cameras for those purposes. But if their future smartphone could do a passable job of those tasks, that'd be rather nice.
The smartphone is currently not the answer to all our photographic needs, but with computational imaging options it's increasingly becoming the answer to a broader array of them, and along the way the improvements open up a whole array of new creative possibilities.
In the end, aside from some possible ethical considerations, computational methods can only be a good development for the world of photography; the future is exciting.
Links you may like to try to dive a little deeper:
https://www.youtube.com/watch?v=Gk7FWH12WLI
Video on the visual core used in the Google Pixel
https://blog.halide.cam/iphone-xs-why-its-a-whole-new-camera-ddf9780d714c
Article on computational photography as applied to the iPhone XS
https://gearburn.com/2016/07/smartphone-computational-photography/
Short article on computational photography processes from 2016, we've come a way since then.