Wednesday, 21 November 2018

Fragments of Your Imagination







My last article on this "BPB" covered shooting a little wider and cropping to reveal an array of alternative compositions. This time I want to zoom in a lot closer and create images from small sections of the frame: fragments, in other words.

Years ago it occurred to me that very small parts of an image can often offer up material for interesting abstract or painterly images; frequently these crops were less than half a megapixel. Yep, it sounds crazy, but with the right upscaling methods these little fragments can add another string to your creative bow.

You can combine the fragments with fragments from other images as well, maybe overlay them out of registration, grain them up, and more.  There's no limit to what you can do and you'll have some pixel-happy, trippy fun along the way.

It's not a process for those folk wedded to the ideal of using their full-frame cameras to seek out maximum, perfect image quality; you folk might find all of this an affront to the senses. No, this is definitely for those looking for a more relaxed, less-than-pixel-perfect interpretation. If you hate blur and image noise, best you leave now and check out one of my other blog articles.

Up front, it's not that easy to find images that lend themselves to this method, but I can offer three tips to kickstart the process.

First, you're not looking for image parts that provide literal, high-quality crops; rather, you're looking for segments with a painterly rather than photographic look to them.

Next, perhaps surprisingly, you'll find images from low res cameras and smartphones tend to work best, especially those shot in RAW.

Third, the texture of the file has a big bearing on how well this all works. If you're not sure what I mean, zoom way in on some JPEGs from different cameras; you'll soon realise they have quite distinct textural characteristics. RAW files, however, give the most creative freedom because you can arrive at different textural renderings by using various interpolation algorithms and processing methods.

With practice, I found I could recognise the likely contenders a little more efficiently, but really, I still need to open them up on screen in a RAW converter to get a solid handle on the possibilities. Just so you know, you can open JPEGs in your RAW converter as well; while you cannot re-process them, you can certainly fine-tune them and interpolate them to larger sizes using different algorithms and sharpening/blur methods, application dependent of course. Iridient Developer on the Mac is an excellent example of a suitable app for this; Lightroom not so much, as you have no choice over the processing methods used.

I found from experience that the best way to pick out fragments is to create a smallish square crop box, zoom in on the pic, then move the box around until I stumble over something that looks half interesting. Once the fragment is isolated, I can fine-tune the image for an optimal result for that specific section. With fragment editing, again, I'm not going for a literal photographic look, so the adjustments and settings I use are often rather radical. Chillax guys, it will all work out I say; just experiment.

Fragments are then exported and saved as TIFF or Photoshop files, resized to something more useful along the way, which often means a 200 to 400% upscale.

I have a folder on my desktop computer for collecting fragments; I just pop em in there as I find them, and just maybe, later on, I'll process them further or combine them with something else. Maybe they'll get flipped and flopped, stretched, mirrored or just serve as a vehicle for a little downtime fun.

I've never gone out and shot images specifically to extract fragments from them, but that could well be a fun and purposeful approach; for now, it's just a matter of looking for happy little accidents.

So what makes a useful fragment, technically speaking, I mean?

Well, images that are over-sharpened usually look pretty ugly at the fragment level; likewise poor, and in particular over-exposed, images lack tone and texture when given the big blow-up.

You might get away with some under-exposed fragments. Of course, they'll look noisy, but that can be dealt with, and in most cases I add noise to the fragments anyway, so I just roll with it.

On the other hand, fragments derived from low-ISO full-frame images don't often gel because, well, they are just too darned clean. Clean fragments end up looking like they've been blown up rather too much, instead of possessing a sort of arty or creative character, but anything is worth a try I say.

Surprisingly perhaps, I have found many a contender among high-ISO RAW files taken on my iPhones! By that I mean pics shot between ISO 200 and 400, which is high for a smartphone. Also, again surprisingly, the latest smartphones with all their fancy-schmancy computational methods produce compressed files that are just too smooth to work well, but your mileage may vary. Some of the best fragments in my collection came from my ancient iPhone 3GS!

Most of the editing could easily be done on a mobile platform, but I usually use Photoshop CC on the desktop. Here are a few tips I can pop your way that might bear fruit:

1) You know those rubbish paint filters in Photoshop? Yeah, they really need some updating, but they can be applied to fragments with great success. I've used most of them at one time or another, so I won't pick favourites, but you can always create multiple layers, try a few different options, compare the results and maybe even blend them.

2) Adding noise is great; embrace the noise. There are a few ways to do this, and all give subtly different results.
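To make that concrete, here's one way to roll your own grain, sketched in Python with NumPy and Pillow (again an illustration, not the only way); the amount and the mono/colour switch are the two dials worth playing with.

```python
import numpy as np
from PIL import Image

def add_grain(img, amount=12.0, mono=True):
    """Add gaussian grain to an RGB image; amount is the noise sigma."""
    arr = np.asarray(img).astype(np.float32)
    if mono:
        # Same noise on every channel reads as film-like luminance grain.
        noise = np.random.normal(0.0, amount, arr.shape[:2])[..., None]
    else:
        # Independent per-channel noise looks more like digital colour noise.
        noise = np.random.normal(0.0, amount, arr.shape)
    return Image.fromarray(np.clip(arr + noise, 0, 255).astype(np.uint8))
```

Try both settings on the same fragment; the monochrome version usually feels more organic.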

3) There's a wonderful option in the Layers panel of Photoshop; it's called blend modes. Take it to the bank: you need those blend modes to get real creative control. Experiment, you won't break anything.
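For the curious, blend modes are just arithmetic on pixel values. A rough Python sketch of three classics, assuming layers normalised to the 0 to 1 range (editors differ in the fine detail, so treat these as illustrative rather than Adobe's exact formulas):

```python
import numpy as np

# Blend a 0-1 float "blend" layer over a 0-1 float "base" layer.
def multiply(base, blend):
    return base * blend                      # darkens, never lightens

def screen(base, blend):
    return 1 - (1 - base) * (1 - blend)      # lightens, never darkens

def overlay(base, blend):
    # Multiply in the shadows, screen in the highlights.
    return np.where(base < 0.5,
                    2 * base * blend,
                    1 - 2 * (1 - base) * (1 - blend))
```

Once you see them as maths, it's obvious why stacking a fragment over itself in different modes produces such different moods.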

4) Forget about normal colour balance; think cinematic colour. Blue it up, warm it up, twist the hues, pop that saturation, c'mon, live on the wild side.

5) You absolutely, positively will need combos of blur and sharpen filtering, and not just the basic "hit it with the simple unsharp mask" method. Again, try wild, crazy settings in USM or, better still, use the High Pass filter options. And blur filters with blend modes... oh, there's some real magic to be had there my friends.
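If you've ever wondered what High Pass sharpening actually does, it's blur-and-subtract: a blurred copy is subtracted from the original to isolate fine detail, which is then added back at whatever strength you fancy. A rough Python/Pillow sketch (parameter values are illustrative, not a recipe):

```python
import numpy as np
from PIL import Image, ImageFilter

def high_pass_sharpen(img, radius=4.0, strength=1.5):
    """Blur-and-subtract sharpening: isolate fine detail, add it back."""
    arr = np.asarray(img).astype(np.float32)
    blurred = np.asarray(
        img.filter(ImageFilter.GaussianBlur(radius))).astype(np.float32)
    detail = arr - blurred            # this is the "High Pass" layer
    out = arr + strength * detail     # strength plays the blend-opacity role
    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))
```

Crank the radius and strength well past sensible values and you get exactly the crunchy, graphic textures that suit fragments.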

That's enough of me; let's have a look at a few samples that I've whipped up using different methods and cameras, and hopefully it'll get your creative juices flowing.




Fragment of DNG file from iPhone 6S Plus, less than 1mp, various paint filters, blur, noise, high pass and blends. Imagine this one on canvas about 30 inches wide.





Fragment of DNG file from iPhone 6S Plus, around 1.2 mp, find edges and texture filters, blur, noise, high pass and various blends. Again a great contender for canvas printing.





Fragment of RAW file from Sony NEX 5n, approx 1.5 mp, find edges and texture filters, blur, noise, high pass, hue shift and various blend modes.





Fragment of RAW file from Sony NEX 5n, approx 1.5 mp, find edges and drawing filters, blur, noise, high pass, hue shift and various blend modes. This one looks a bit like a saturated water-colour in real life, it is quite yummy.




Fragment of JPG file from iPhone 3GS,  approx 0.4 mp, find edges and texture filters, blur, multiple noise filters, high pass and various blends, hue shift.  Actual print is 24" wide on canvas!





Fragment of RAW file from Sony NEX 5n, taken at high ISO, multiple image overlays, donor crops approx 1 mp. Find edges and drawing filters, blur, noise, high pass, hue shift and various blend modes.





Fragment of DNG (RAW) file from iPhone X, approx 1 mp, find edges and drawing filters, blur, grain/noise, high pass, hue shift and various blend modes. I really like the semi-abstract feel of this.





Fragment of RAW file from Sony NEX 5n, taken at high ISO,  approx 1 mp, image flip with copy/paste, blur, noise, high pass, hue shift and various blend modes.





Fragment of RAW file from Sony NEX 5n, mirrored fragment, approx 1 mp, find edges and drawing filters, blur, noise, high pass, hue shift and various blend modes.


So there you go, now go off and have some fun. Oh, and by the way, should you decide to go out and deliberately "shoot for fragments" I'd love to hear from you and see what you come up with.

Happy Shooting.









Wednesday, 31 October 2018

Post Cropping




Near Pejar Dam NSW


How often have you stood in front of a scene thinking, “I know there is a photo in there somewhere, but I just can’t work out exactly where?”

Many scenes are elaborate, presenting multiple framing options, and working out the optimum composition can be frustratingly difficult. I’m sure all photographers have struggled with this at some time; often we are just spoilt for choice.

A few months back I took a “one on one” client to some of my favourite local spots to explore some compositional options and practise a few technical concepts. As usual, I used my iPhone to demonstrate the finer compositional points; I find it excellent for the task because the screen eats your average camera alive, and it’s just so easy to pinch in and out on the image and move around the frame when explaining compositional aspects.

As is usual these days, I shot the sample frames in RAW/DNG, just in case I wanted to do something with them later on.

Back in the office, looking at one of the samples it struck me that maybe it could be a handy vehicle for demonstrating the idea that “within an image, there may be several other images just bursting to escape from the greater frame and wander off by themselves”, and so we come to this post.

The resolution of modern cameras and even smartphones is now so good you can crop mercilessly and still get a perfectly usable image, especially if you shoot RAW and are prepared to do a little tweaking with the RAW converter and Photoshop.

Now to be clear, I’m not suggesting that you should get all lazy and slack and work this way all the time, but maybe this post will inspire you to look at some of your past work with a new eye. You may also decide to use the post-shot crop as an option for those times when you “just can’t quite sort the composition out” in the field.

First a few tech bits on the sample image. The “full image” is a small panorama, I needed a little extra width in the frame and due to physical constraints couldn’t move back any further. 

It was shot on an iPhone X using DNG in ProCamera.

Settings?  ISO 20, 1/800 sec, 28mm at f1.8, Uni WB.

Editing? Processed in Iridient Developer using a custom profile, output at around 25 mp, then stitched and edited in Photoshop to give a final file just a whisker under 30 mp. (Like I said, a very small panorama.)

Crops were then taken from the 30mp image and downsized for the web to give what you see here.

Just in case you’re wondering: no, the 30 mp files don’t look quite as detailed as you’d expect from a regular camera with a native resolution of 30 mp, but it’s probably much closer than you expect it to be. The iPhone X has an excellent lens with even edge-to-edge performance, so if you expose optimally in RAW/DNG the resulting images up-rez very well.

Anyhow, let’s get to the pics. This set is by no means the limit of possible crops; I found several other options as well, but I had to stop at some point to keep this article mildly manageable.

All of these, bar the last frame, are just crops of the full image; I haven’t done any cloning or selective edits to the cropped frames, but ideally, for real use, I would.

Frame 1 at the top of the page is the complete panorama image. It’s taken at Pejar Dam, about 20 km from Goulburn NSW. We’re currently in a drought; the colours are an accurate representation of the then-current winter tones.

I’m pretty happy with the composition, but the burnt dead tree on the left of the frame concerns me; it's a tad ugly. I’ve three options here: accept it as the reality of the scene, clone it out, or find another cropping. In truth, that burnt dead tree is typical of the Aussie bush and part of the story. Photographers have choices; there are no hard and fast rules unless the image is for documentary/reportage use.


Basalt rocks and boulders near Pejar Creek NSW crop 1


Frame 2 (above) is a heavy crop; it represents about 35% of the original frame. Still, it came out at around 3000 by 2800 px, so plenty big enough for an A4 or even an A3 print taking viewing distance into account. This crop emphasises walk-through depth due to the implied leading line over the top of the rocks; it’s a much simpler composition than the full image and is much easier to read. I quite like it.
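For those who like to sanity-check such claims, print size is just pixels divided by print resolution. A quick back-of-envelope calculation in Python (the ppi values are common conventions, nothing more):

```python
# How big can a 3000 x 2800 px crop print at common resolutions?
def print_size_inches(px_w, px_h, ppi):
    """Return print dimensions in inches at a given pixels-per-inch."""
    return px_w / ppi, px_h / ppi

for ppi in (300, 240, 180):
    w, h = print_size_inches(3000, 2800, ppi)
    print(f"{ppi} ppi -> {w:.1f} x {h:.1f} inches")
# 300 ppi gives 10.0 x 9.3 inches (comfortably A4); at 180 ppi, viewed
# from a normal distance, roughly 16.7 x 15.6 inches covers A3.
```

At gallery viewing distances even lower ppi values hold up, which is why big prints from modest files work better than the raw numbers suggest.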


Pejar Creek NSW, Landscape pic with Rocks and Tree crop 2


Frame 3 (above) is probably the crop I like least, but it demonstrates a point. I do like the robust vertical approach, but the partial tree on the left annoys; it’d be better if it were either not there or more fully included. Two problems: cropping off the tree at the right of the trunk would leave an enormous amount of unsupported foliage in place within the sky, while keeping more of the tree would introduce some of the annoying dead tree to the left of it.

So here’s the dilemma: in both cases the issues can be easily corrected by cloning, but it comes down to whether you think that’s an acceptable approach. I’ve no issue with cloning, so long as the pic's not being passed off as the real, unadulterated thing.

The takeaway point: you can often create nice images via crops out of larger frames combined with a little cloning, provided the original image offers suitable content which can be placed seamlessly into the cloned areas. In this case, it’s straightforward to remove the remnant foliage should I crop the tree out, since there’s an enormous area of cloudy sky available which can be blended into that area. Likewise, keeping more of the tree would not be a problem, as the dead tree to its left would hardly be a challenge to remove via cloning.



Pejar Creek NSW Landscape, Rocks and Boulders crop 4


On to frame 4 (above). This image is similar to frame 2 but moved further to the left to exclude the rock with the tree growing out of the top; it also places a bit more space around the rock formations on the left of the frame.

I quite like this version: the foreground rock now dominates the frame, and the receding sizes of the other rocks give a strong sense of depth. Additionally, we have an implied receding line that runs from the top of the foreground rock, across the tops of the rocks behind, and ends at the tree to the left of the frame, which all combined strengthens the 3D feel.

Additionally, there’s a subtle interplay between the large rock at the top of the frame and the large foreground rock; they balance each other and prompt the eye to oscillate between them, again increasing the sense of 3D.

Ideally, I’d clone out the foliage on the upper left of the frame, and probably the bit of dead stick between the two foreground rocks, to tidy things up; even the most clone-averse photographers should have no issues with those minor changes.



Pejar Creek NSW Landscape, Rocks and Boulders crop 5, vertical composition


Frame 5 was an unexpected option; I didn’t see it when I shot the pic. I love the feel, and it makes me want to go walkies between the rocks and venture in behind the main rock to take a little peek at that dead tree. The partial foreground rock adds depth and a sense of perspective, plus I like the little bit of dark sky on the top right of the frame which helps to, well, frame the image.

Again, there are cloning adjustments that could be applied to tidy the image up, nothing big mind, but I’d probably remove that rock on the edge of the upper left of the frame, or I could crop in a little more on the left. Ideally, I would darken the bottom left corner of the frame to balance the upper right and emphasise the composition. Generally, I reckon this cropping has legs, hence I created another version of it, frame 8, to demo these very points.


Pejar Creek NSW Landscape, Rocks and Boulders crop 6


Frame 6 (above) is a bit of an oddity, but I think it shows potential. It contains a lot more information than the other crops, so it needs to be presented as a larger-scale image to avoid appearing a bit confused. I’ve seen plenty of Australian landscape paintings with similar approaches presented at large scale; they provide the impression that you are looking through a window into the great outdoors.

Allowing more space in the vertical to include the top of the rock on the right side would be nice, but it creates a problem with random foliage from the lower levels of the eucalypt tree; again, easily resolved if you’re not clone-averse. Additionally, I could include a little more at the bottom of the frame to provide a bit more negative space around the foreground rock.


Pejar Creek NSW Landscape, Rocks, Boulders and Old Gum crop 7


On to frame 7 (above); all I’ve done here is crop off the right 25% of the image. I feel the dead tree/eucalyptus combo works far better in this instance. There’s a delicate balance between the rocks on the right and the trees, and the sky-to-foreground ratio is good. Overall it’s quite a satisfying version, and with some tonal tweaking and minor cloning to tidy up some ugly temporal bits, like the dead twigs on the tree that could fall off any day, it'd make a lovely subtle print.
  
I was surprised that I didn’t see this version when I shot the pic; I think the rock formations to the right of the full original image dominated my initial perception of the scene, blindsiding me.



Pejar Creek NSW, Landscape, Rocks, Boulders and Hidden Tree, edited crop, strong vertical composition.


Finally, frame 8 is a re-cropped and modified version of frame 5, showing how a little cloning and selective editing can be applied to improve the result. Bearing in mind the image represents just around 20% of the original photo, the result is rather satisfying.



So here are 15 tips you can take away from all this.



An image may have several alternative compositions available that aren’t obvious at the time of capture.

Most modern cameras easily have enough resolution to allow for major cropping and still produce images that look fine, especially for web/online use; as an example, just 6 mp enlarges to a great A4, or maybe even A3 if the editing is good enough.

Even smartphones, when shot in RAW/DNG, have a significant degree of crop-ability; remember, this sample is an iPhone RAW image which started out natively as a slightly stitched 14 mp image.

Shooting your regular camera in RAW and using appropriate processing methods significantly improves your image's crop-ability; if you’re going to crop heavily, interpolate the image upwards in the RAW converter.

You could shoot with a wide-angle lens to allow for a greater array of crop options in post, but I’d advise using an excellent lens with solid resolution across the entire image area; it's probably also a good idea to keep very steady or use a tripod.

Despite what you may think, ultra-wide lenses are not ideal tools for post-shot cropping; once you get away from the central portion of the frame, distortion can make the cropped frame look a little odd, and that can be hard or impossible to sort out. I find a 24mm-equivalent or longer lens works fine.

Some folk may object to the shoot-then-crop approach, feeling “you should get it right when you shoot it”. OK, that's cool. But hey, what’s the difference whether you found the perfect composition when shooting or in post? It’s still your work and your decision. In any case, not everyone is blessed with a wide array of lenses to cover all framing possibilities.

If you’re prepared to accept cloning as an option, you’ll significantly expand your cropping options. 

Differing crops require different fine-tuning; this might include dodging, burning, depth-of-field simulation, colour tweaking and more. Generally, to save time, you do all these things post-crop.

If you use an application like Photoshop to edit the pics, you can also explore the options of non-constrained image re-sizing and content aware resizing to further improve the final results obtained from your crops.

Some of the crops can produce very unexpected but positive results, so don’t hold back on trying everything out, even crops that might initially seem silly could pay off.

In many cases tiny changes to the crop can make very significant changes to the overall composition; it may be a good idea to revisit the pics with fresh eyes in a day or two.

Don’t get hung up on producing images that fit a particular aspect ratio; if you’re going to print the image you can always use a custom matte or frame.

Don’t get too hung up on shallow depth-of-field looks when shooting; you can likely get whatever look you want via DOF simulation in post-crop editing. Remember, you can always blur details, but you cannot put in details that were not rendered at the time of capture.

When shooting, make sure that your image has full tonal-range rendering across the entire frame; small areas of clipped colour/detail are perhaps acceptable in the full image, but they’ll likely prove jarring in a heavily cropped version.
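One way to make that last check objective is to count how many pixels sit pinned at the extremes of the tonal range before committing to a heavy crop. A small sketch in Python with NumPy and Pillow (the thresholds are my own illustrative choices, not a standard):

```python
import numpy as np
from PIL import Image

def clipped_fraction(img, low=0, high=255):
    """Fraction of pixels clipped to pure black or pure white (8-bit)."""
    arr = np.asarray(img.convert("L"))  # judge clipping on luminance
    return float(((arr <= low) | (arr >= high)).mean())
```

As a rough rule, anything much above a percent or two inside the area you intend to crop is a warning sign.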

Finally, just to remind you, I’m not suggesting you take a relaxed approach to your framing and composition, post-cropping is just another tool in the shed. In the end, isn’t it great to know we have compositional options?  

Just to finish I have included a monochrome shot below taken on the same site.

Happy shooting and happy cropping.


Pejar Creek NSW Landscape, Boulders and Blackberries, monochrome, square format




Mobile Imaging and Computational Photography



Low-light, extreme-contrast images like this are normally very difficult to obtain with a smartphone; in this case Adobe's Lightroom HDR Raw (iPhone app) and some serious computational processing allow for results not far removed from those of a regular DSLR or mirrorless camera.


Note: This article is a summary of a presentation initially given to the Canberra Photographic Society in April 2018 and then later to the Canberra U3A Camera Club in July 2018.  I've made a few small additions since the initial presentation to reflect mobile imaging changes in the past few months.

Since the beginning of photography there have been several significant revolutions, the move from glass plates to film, the Kodak Brownie, the adoption of smaller formats (35mm mainly), colour slide and negative film, digital photography and the smartphone.  The next frontier, already well underway, is computational imaging.

The old imaging methods, and by that I mean traditional digital methods, not film, are being replaced by processes and options which to many photographers appear to be almost witchcraft, possibly even photographic heresy.

In simple terms, we're now rapidly moving from photography methods where physics determined the results to ones where computational (software) methods rule the roost. We still need good quality lenses and sensors, but we can now produce images that transcend the physical limitations of those items.

Most of my peers are no longer in the business; some found the move into the digital age too stressful, some didn’t want to learn anything new, many went broke. I've got the greatest respect and appreciation for all we have previously seen and done in photography, but I firmly believe that photographers need to embrace these new developments; they are not going to go away.

The new options offered by computational imaging are fascinating; some have even the most adventurous and adaptable of us scratching our heads and wondering "is that such a great idea?". However, the digital world won't stop just because we find it a wee bit challenging, so it’s a matter of getting up to date and with the program. Hopefully this article helps put a few things in perspective. I've added some links at the end for those who want to dive a little deeper.


Thinking About The Losses

Sadly, many of the skills I've developed over the last 40-plus years no longer count for much, if anything, so I appreciate the potential angst photographers associate with computational imaging. Here’s my short personal list of lost methods and options; it may strike a chord with some of you.

Large format camera control? Mostly irrelevant, and seriously, who can afford to shoot LF these days? Film is crazy expensive and the processing a pain to access, at least in my neck of the woods.

Fancy chemical concoctions for custom processing film and negs?  Dangerous and messy and not very convenient.

Darkroom techniques to extract the more delicate details and tones in printing? Why bother, Photoshop and inkjet printing are the go now.

Neg duplication?  Don't be stupid, "scan em to digital"!

Direct image restoration methods? Have you never heard of scanners and Photoshop!

Hand tinting?  What the hell is that?

There's a lot more besides this little list, but all of these things once made me money; today using them would send me broke. As Bob Dylan said, "the times they are a-changin'", and I'd rather change with them; in fact, I generally enjoy the challenge. Perhaps a little personal history is in order to set the scene for where we have come from and where we may be going.


A Little History


I adopted digital methods in around 1995, mainly out of necessity: I'd developed a severe allergy to most photo chemicals and subsequently became very ill. I even left professional photography for a while because of this, so when the digital revolution gathered momentum in the mid-90s, I had no qualms about jumping on board.
   
At first, it was not a happy marriage; everything attached to digital methods was stupidly expensive and low on quality, and most of the tools were downright primitive compared to what we take for granted today.

I've got a collection of early digital cameras to remind me of the shortfalls and how far we have come. Fortunately, despite the doom-filled predictions of Luddites, by around 2004 digital had made several quantum leaps; most of the annoying issues had been resolved, and the quality by then was more than adequate. The rate of development in that period was far quicker than predicted, especially between 2000 and 2003; this should be a salutary lesson for the current state of play in the “smartphone versus DSLR” wrangle. Many photographers firmly maintain that smartphones will never be good enough; I'd suggest history tells us they are wrong.


The iPhone Revolution


The next giant leap for me was mobile photography, which commenced in mid-2007 with the release of the first iPhone in the US; we Aussies didn’t see the magical new device until June 26, 2009.
  
My wife brought home the newfangled iPhone 3GS on day one. I held out for a few weeks but soon realised I'd be left behind by Wendy if I failed to possess the magical new device.

Like the move to digital, it’s taken mobile imaging a decade or so to reach a stable level of maturity and sophistication. There's truly little comparison between the quality of the first serious smartphone images I made in 2009 and those I can easily create on my current iPhone X; it's an absolute chalk-and-cheese comparison. Again, the rate of development should be a significant lesson, especially when you consider that the change in image quality for regular cameras over this period has been relatively minor.

The initial iPhone (which we didn't get) had an inferior fixed-focus 2-megapixel camera; taking photos was not high on the Apple agenda, and pic quality was only marginally useful at best. The first 3GS model we saw in Australia sported a 3-megapixel camera with better, though still poor, performance; heck, you couldn’t even control the exposure.

How times change. Today the significant developments are almost all on the mobile imaging platform; these include the adoption of RAW capture, more open platforms, AI as applied to imaging, dual- and triple-sensor capture, depth engines, and far more. The imaging technology crammed into your average high-end smartphone is mind-boggling, and truthfully very few people have a clear understanding of all the processes involved; most are just glad it works.

The rapid improvements will continue, building upon one another, turbo-charged continuously by the work of thousands of highly creative software and app developers rather than just the efforts of a few in-house hardware and software techs, as is the case with traditional camera makers.


The Next Step: Global Shutters


As I write this, the new iPhone XS models sport a global shutter, and we'll almost certainly see this within the next year on other iPhone and Samsung devices; it's also likely other upstart Chinese makers like Huawei will jump on board as well. Already camera testers are singing the praises of the real-world difference this single change has made to results from the iPhone XS cameras.

Regular cameras will, of course, follow suit with global shutter options, but the advantages will be less pronounced, and regardless it will still be some time before they adopt the option, as it requires a significant re-design and is vastly harder and more expensive to implement with larger sensors.

The advantages of full global shutters are the speed of shooting, no physical noise, no shutter re-cocking, no distortion of the image, no moving parts to wear, and a whole heap more. The critical point is that true global shutters enable vastly better implementation of computational processing methods.








Traditional Capture versus Computational Methods


Yep, it's traditional capture versus computational methods. With computational imaging, the processes are almost more important than the sensor and lens. Lenses and sensors are limited by physics, whereas computational methods are limited by battery power, processing speed and the imagination of the software developer, all aspects smartphones have well and truly nailed.

Smartphone developers have no real choice but to go down this computational path; most of the low-hanging fruit regarding sensors and lenses has already been picked, and the laws of physics are now significant limitations to further image quality improvements.

Batteries, processing power and algorithms, on the other hand, continue to develop at a rapid pace.

Traditional camera makers, of course, face much the same problems, but realistically all of the improvements at the capture end in the last few years have been mostly minor incremental changes that probably work better from a marketing aspect than in actual shooting performance, though these changes have kept their tills turning over.

For regular camera makers, the benefits of computational processes won't provide as much leverage; the hardware performance of their tools is already perfectly acceptable as far as most consumers are concerned. Smartphones, on the other hand, have much ground to gain by bringing their performance more "in line" with the current crop of DSLR and mirrorless cameras.

For many consumers, regular cameras and smartphones are already line ball, but more serious shooters remain well aware of current smartphone imaging limitations. 

Computational imaging has long been a reality; realistically, all digital processing and editing is computational. In this instance, though, we’re talking about far more powerful processes which change the capture process itself and positively impact our imaging options.

At a basic level, computational methods can work simply to improve image quality, but at the pointy end they may also create content that doesn't exist in the real world; it's the latter option that I, like many photographers, find a bit confronting.

Just last year, tech company Nvidia demonstrated a new tool/process that created unique new human faces using real faces as input via machine learning and AI methods; it is both fascinating and creepy. More recently, Nvidia has demonstrated "content aware" fill methods that make Adobe's version look pretty lame. Will this find its way to your smartphone? Probably.

The Google Pixel achieves much of its stellar output not because of an amazing sensor or lens (as far as I can gather, neither is anything truly special) but via a very clever combination of machine learning and AI, so the future is already here!


What Computational Imaging Offers 


Computational methods may include the following:

Image stacking to reduce noise, increase detail, remove objects in motion, increase the depth of field, increase dynamic range and more.

AI (artificial intelligence) to guess at missing content, remove or add objects, provide meaningful metadata, better handle exposure and colour issues, and choose appropriate exposure settings.

Fusing the output from two or more separate lens/sensor modules to allow for better detail, lower noise or even zoom effects.

Machine learning to help build better AI implementations and customise your equipment to your specific uses.

VR (virtual reality) content creation, or blending real content with virtual content.

Augmented reality for advertising. 

Off-device supercomputer processing to carry out highly complex tasks that exceed the capacity of the device.

Depth mapping to control the depth of field and also drive selective editing processes; currently we are at level one, and the sophistication will improve very quickly from this point onwards.

Image stabilisation processes that take into account more variables, even subject movement and direction.

White balance control using AI and up to date weather/location info. It will likely also include altitude, direction and possibly even a separate colour sensor module to read the light temperature itself.

Suggestions for framing and composition based on real time content.

Lighting effects simulation.

Intelligent image stitching using AI and machine learning.

Now that's a big list, and no doubt there are other options I've not even mentioned or perhaps considered. What you may have realised is that most of these things already exist in one form or another, so none of this is wishful thinking. You may have also noted that, with few exceptions, none of these are easily applied to regular cameras in their current state.
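To make one of those list items concrete, here's a toy sketch of HDR-style exposure fusion, one of the image stacking ideas above. It's purely illustrative (the function name, the 4x exposure ratio and the clip threshold are my own assumptions, not any phone maker's actual pipeline): use the longer, cleaner exposure everywhere except where it clips, and fall back to the scaled short exposure to recover highlight detail.

```python
import numpy as np

def fuse_exposures(short_exp, long_exp, clip=0.95):
    """Naive HDR fusion of two brackets of the same scene.
    Assumes the long exposure received 4x the light of the short one,
    so the short frame is scaled up to the same radiance scale."""
    gain = 4.0
    scaled_short = short_exp * gain
    # Keep the cleaner long exposure where it hasn't clipped,
    # otherwise recover the highlight from the short exposure.
    return np.where(long_exp < clip, long_exp, scaled_short)

# Three pixels: shadow, midtone, and a highlight that clips in the long frame.
short = np.array([0.05, 0.125, 0.3])
long_frame = np.array([0.2, 0.5, 1.0])
fused = fuse_exposures(short, long_frame)   # -> [0.2, 0.5, 1.2]
```

The fused highlight value exceeds 1.0, i.e. detail beyond the single-frame clipping point has been recovered, which is exactly what the stacked "HDR" modes on phones are doing (with far more sophisticated alignment and weighting, of course).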



What About The Hardware


Most computational processes require changes to the hardware, including additional lenses and sensors, different sensor designs, infrared sensors, global shutters, lidar-type options and more. Many of these hardware factors are beyond regular DSLR/Mirrorless camera technologies; they'd require radical re-designs to implement and most likely very different form factors. For example, many processes require more than one lens, though as said, Google has performed a few miracles with the single-lens Pixel smartphones!

Most likely consumers would resist regular cameras with radically different forms, they've certainly done so in the past, but mobile devices with their small form factors are far more likely to get away with extra lenses, sensors and other bits and pieces. 


Connectivity Is An Issue


One common thread in many of these computational processes is the need for connectivity to the web, which kind of rules out regular cameras as we know them.

Many photographers still argue that connectivity is irrelevant to them. Perhaps so, but my take is that photography is a form of communication; it's about telling stories, and anything that gets in the way of this goal will limit the usefulness of the device.

Regular cameras may not need internet connections, but they should at least be able to transfer their images seamlessly to your mobile devices. Most still can't even do this reliably without messy workarounds, some are just plain infuriating, and firmware/software updates are often still ridiculously messy.

Meanwhile, back in the "real" camera world, manufacturers and photo forum lurkers are expending considerable energy arguing about the clash of Mirrorless versus DSLR technology, something I find rather quaint in the light of current developments. Currently, the big news which has them excited is Nikon and Canon's new mirrorless cameras, which are only about 5 years too late...ho hum. I'm not denying these cameras are very good devices...but really, what's truly new or revolutionary about them?

A recent article on DPReview, involving interviews with camera manufacturers about the mirrorless developments at CP+, garnered almost 1200 often profoundly hostile responses in just 24 hours from DSLR adherents; clearly many folk have entrenched positions where equipment is concerned. You'd have to be very brave, stupid, or wearing flameproof clothing to venture down the pathway of comparing mobile imaging with regular cameras on these often highly polarised sites.

But...I reckon these folk are dead wrong......

Seriously, for many users the war is over: traditional camera makers have lost the mass market. The funeral is a long way off, but unless they radically change their business models, they're all due for even more severe pruning of both income and profits. Their lunch, and probably afternoon tea as well, is now being eaten by Apple, Samsung and probably quite a few "late to the table" Chinese upstarts. Forget about the sideshow of the DSLR versus Mirrorless debate fisticuffs; for many consumers that's a mere distraction from the real imaging action, and ultimately it's probably a case of regular camera makers needing to follow smartphone makers.


What's Next?


I'll make a couple of comments that I feel have relevance to photographers wondering what they should do or perhaps buy next.  

Sad to say, the DSLR in particular is a technological backwater at the fag end of its developmental life; it surprises me it lasted this long. Most of the DSLR releases from the past 3 years have been tepid at best, none have shown any great innovation, and the only reason sales remain robust is that it takes a long time for "Joe Public" to change his ways, that, and low-end DSLRs are now crazy cheap.

DSLRs have remained the "long-term default purchase", but the market is now moving very quickly towards Mirrorless, and all the traditional DSLR benefits are now lost to the relentless innovation in the Mirrorless camp. Even Nikon and Canon have finally leapt into the mirrorless market; that should tell you something!

Mirrorless cameras now focus as quickly and accurately as DSLRs, they have better and more useful viewfinders and plenty of lens options. It's hard to make a solid argument that’s not steeped in teary-eyed nostalgia for our old DSLR friend.  Though, as an owner of a full frame DSLR and 13 or so lenses plus a whole raft of accessories, I well understand why some would want to argue the pro DSLR case. I still use my now 10-year-old FF DSLR for paying work, but there's no way I would replace it with a new DSLR.

We’ve reached a point where the mirror, pentaprism and other needed mechanical bits are impeding development or preventing it altogether.

With all that said, the combo of a modern mirrorless camera and a new smartphone might just be the killer combination that will fill the needs of most non-professional shooters.

Increasingly I've found that I can go on holidays and cover all my needs with my M4/3 mirrorless camera with a tele zoom lens and my iPhone shooting in RAW. This combo gives me superb flexibility and low weight, with results that are easily good enough; I certainly don't feel I'm missing out in any way. Even more importantly, with a few choice apps on the iPhone I can delve deeply into some very creative aspects of photography and edit to my heart's content. What's not to love?


So Why The Angst Over Mobile Photography?


I suspect the concerns among photographers relate to the erosion of the value of their skills, fear of new options that are difficult to understand, the cost of changing equipment and perhaps the idea that image quality will be suspect.

However, deep down, it's most likely just the idea that all our hard-won methods, skills, abilities and expensive equipment will be rendered obsolete by an influx of the "great photographic unwashed" with their soulless universal photographic devices.

New smartphone options have created imaging possibilities we could only dream of a few years ago; creatively, it's never been a better time to be a photographer. If these exciting new options bring new people into the fold who have more artistic rather than technical capability, well, I'm okay with that. Importantly, the additional options and ease of use may free up some longer-term shooters enough to allow them to explore aspects they've until now thought "off limits"; it certainly did for me.

For my part I have gained more enjoyment and creativity from my mobile photography than I ever did with either my film or regular photography. 

The real challenge is that the skill-set needed to navigate and implement these new options is different and in some cases involves concepts without parallel in traditional image making. Consider: the change to digital altered the tools of capture and how we edited our images, yet it was not truly difficult to grasp because nothing fundamentally changed; even the terminology stubbornly remained the same in most cases. Computational imaging, on the other hand, radically changes the tools and, more importantly, radically alters the methods, processes and possibilities.

It's a confronting challenge. Many traditional photographers, when looking at the work of other shooters, still ask the hoary old "what did you shoot that with" question; in the future that question will be meaningless, and it already should be. The idea that "all you need for success in shooting" is to buy "XYZ" camera and lens will finally bite the dust, and not before time either.



Once They Were Technocrats


Photography was once the domain of the technocrats, many had a good degree of artistic flair but more importantly they had access to an expensive arsenal of tools that average consumers did not. Now and in the foreseeable future, it will be more the domain of the creatives, though it will always be the case that good technical knowledge and skills will continue to help enormously.

In the not-too-distant past, the technological difficulties (especially in the film era) were so overbearing that technical skills and gear often truly mattered more than art. Once upon a time, the differences in lens quality were enormous and tied tightly to price, the difference between the output of a cheap compact camera and a DSLR was a chasm, and the difference between 110 and medium format film was beyond any meaningful comparison.
  
In the past money did indeed equal results from a quality perspective, which was a big win for camera makers wanting to push aspiring photographers up the equipment ladder. 

Today the high-end gear is better than ever, but the low end has improved much more in comparison. Image quality sufficiency is now achieved at a vastly lower relative price point, with the benefits of the most expensive equipment only being realised within a minimal set of circumstances that just don't matter to most consumers.

Not long ago it was a minor miracle to get an image correctly exposed, focused and framed; those factors alone carried colossal brownie points, but now they're aspects taken for granted. Artistry and content are what remain for photographers to struggle with, which for most of us presents a far greater challenge, one that can often only be met through very extensive experience and time.






Photography as a Core Literacy


New computational image developments have further driven photography down the path of being a core literacy.  As the capture tools become increasingly irrelevant, lighting, composition, message, and content take their rightful place as the differentiators between image makers. 

Photography has become a mass communication method no longer encumbered by high cost and inconvenience; a picture was once said to speak a thousand words, now it's entire volumes. So where are we now on the continuum? Well, the technical hurdles have been largely knocked down and the cost of entry is incredibly low, but the bar of artistry has been raised enormously.

Smartphones have a considerable head start in facilitating photography as a literacy. They're spearheading the current technological race as precocious upstarts that in the space of just a decade have changed mass market photography probably by as much as the Box Brownie did all those years ago. The processing power of the modern smartphone is enormous, even compared to modern desktops and laptops, and additionally they have full connectivity to allow them to leverage off supercomputer processing and many other connected options. Yet just 10 years ago, today's smartphone would have seemed a tech fantasy.

All of the above means that the smartphone has become a natural way of communicating; it's now the visual typewriter, allowing us to say things in ways we could only dream of in the past. Here lies the core issue: visual communication is no longer a novelty, it's commonplace and cheap, but the flip-side is that for an image or video to gain traction it has to have very good content and message, great composition and a bit of pizzazz; it is no longer sufficient that it just be technically competent.

Computational processes ultimately do three things: they make the quality better, which sorts out the technical competence bit; they make it easier to get results in problem shooting situations; and they increase the array of creative options available. All of these aspects expand the communication potential of the device, and right now regular cameras are being quickly chased down by smartphones and in many situations have already been made completely redundant.


Comparing The Computational Options


So for photographers rather than casual consumers where is all this computational mumbo jumbo going?

First, for comparison let's consider the currently accepted advantages of traditional DSLRs and Mirrorless, they have...

Depth of Field control via the choice of aperture and lens type.

Lower image noise in all situations due to vastly larger sensors.

Higher resolution, currently 16-50mp or so.

Superior low light performance, especially for the full frame versions.

Easy telephoto lens options.

Now, are any of these advantages unassailable by modern smartphones? No, all can or will eventually be dealt with via computational imaging methods and hardware developments, and while the current implementations may be less than perfect, the rate of development is extremely rapid.

The last 12 months alone have seen enormous improvements; it's like the manufacturers have found top gear and switched on the nitrous oxide. Even the portrait mode options on iPhones have leapt ahead via alternative apps, and to be honest most casual shooters were fine with the results from the initial product releases. Will serious shooters ever be happy with the "portrait" results? Yes, I am sure they will, though prejudice will get in the way for some time yet.

Smartphones also have core advantages over regular cameras...

They're always in your pocket, something not to be under-rated!

Smartphone screens are vastly better than virtually any traditional camera screen, which can make composition and playback far more pleasurable, so long as you can see the screen in the first place.

Smartphones have connectivity to the net at all times.

There are vast "in phone" editing options for both raw and compressed files which easily exceed anything on regular cameras.

Smartphone ease of use when you just can't be bothered and want to shoot in auto is pretty amazing and consistent.

A smartphone tends not to intimidate your subject or users for that matter.

There are still clear disadvantages, and of course, many traditional photographers are super keen to point these out.

Smartphone low light performance still sucks if the situation gets dire.

The ergonomics are just horrible; they’re slippery and hard to hold steady, it's often a case of form limiting function.

Lens options are not great, most "add-on lenses" are terrible and the attachment methods are suspect at best, and there's no universal standard for lens attachment in the first place.

Screens despite being terrific can be tough to see in bright sun.


To pull the advantages and disadvantages apart let's see where the truth in 2018 sits.


Top drawer smartphone lenses already resolve at extreme levels; taking sensor size into account, the performance of the better lenses is excellent in all measurable ways, including chromatic aberration, vignetting, cross-field clarity and contrast. My tests with DNG files have shown some are considerably better than almost any regular camera lens.

As smartphone manufacturers add additional lens/sensor modules, the resolution gap between regular cameras, with their multiplicity of lens options, and smartphones narrows.

Traditional photographers often get huffy and dismissive, dealing out the old "smartphone image quality is poor" card. Perhaps the differences are much less evident than they believe; the improvements from one smartphone model to the next are usually quite profound, and the image quality difference between, say, an iPhone 6 and the XS series is enormous. Smartphones are usually kept for at least a couple of years, so many photographers likely have no current yardstick to compare against.

At this point, I'll mention that old chestnut of a counter argument many photographers use in refutation of smartphones as viable cameras.  "Buying a new phone every two years makes it an expensive camera".  I find this argument disingenuous at best; you hardly buy a smartphone to just use the camera, surely you'd use the net, make calls, keep a calendar, create notes and all that other stuff, the camera is a very handy bonus.  

When you take into account all the device does, the value factor is very high, and for many people it might mean you don't actually need to spend money on a traditional camera at all! (That's, of course, the last thing Canon and Nikon want to hear, but it's no doubt music to Apple and Samsung's ears.)






The Zoom Future


Most serious shooters believe that smartphones are no good for sport and birding. True enough, but this is a relatively small subset of most people's needs. Fear not, the folded zoom smartphone is now on the horizon.

Folded zooms will probably give you at least 125 to 150mm equivalent, still not great for sport or birds but better than what you have now and no doubt that range will extend with time. However, for at least the next few years you'll undoubtedly need a DSLR or Mirrorless camera if you want real telephoto reach. In the interim cameras like the brilliant fixed lens Sony RX10 mk4 seem all the more sensible and appealing.

Combining folded zooms with various types of computational methods, however, may actually extend the zoom range well beyond, say, the 150mm equivalent mark; it wouldn't surprise me if in another 6 or so years we have smartphones that can offer a 400mm equivalent in our pockets!


Poor Quality in Low Light? 


Again true, but the performance of smartphones is quickly improving and is not all that bad. While many regular cameras offer excellent results in low light, users still often struggle with getting sharp focus, adequate depth of field and eliminating camera movement. However, if you want to shoot seriously low light situations or capture star-fields, you can forget any current smartphone option.

The ace up the smartphone's sleeve is the now common use of image stacking methods to reduce noise and increase quality. Whilst quality is still not as good as having a large sensor, the results using new apps and computational methods are becoming quite acceptable. The pics of the church interiors I have used in this article are a good demo; they were shot using the Adobe DNG HDR function in Lightroom Mobile. The scenes have enormous contrast and low light levels, but the results are really quite acceptable and would print fine.


Focus Issues? 


This is just not true today. Many phones use both phase and contrast methods, and lenses can focus very quickly due to the low mass involved; I'd say we are just one generation short of seamlessly good focus across all situations.

Focus issues still occur in very low light or where there is insufficient contrast, but many regular cameras struggle under the same circumstances; I could name several DSLRs that give up the ghost as soon as contrast falls off, and many that have particular difficulty in live view.


Not for portraits? 


This is not strictly true today, and the situation is rapidly getting better. 

Some newer smartphones have tele lenses in the 70mm equivalent range which with a little cropping is close to perfect for most closely framed portrait needs, but even so, great portraits can be shot in the 50 to 70 mm equivalent range.  

The real change factor here is the new-fangled portrait modes that simulate depth of field and lighting effects, which leads us to the old "DOF is hopeless with smartphones" argument. Look, I honestly think only a few photos need super shallow DOF to visually work; in some ways that shallow DOF look is a crutch for those unable to, well, compose the shot, and I certainly feel it's used as a default way too often.

I don't think it is critical to really simulate that "shot at f1.2" look, but some decent depth of field control is desirable, and DOF simulation is THE major frontier of development at present. Many photographers appear to have an ethical issue with simulated DOF, but in the long term it should be possible to simulate the look of pretty much any lens and aperture setting, and who cares how that's achieved, so long as the images look pleasing?

It is even possible now to shoot in DNG and get post-capture aperture setting with the right apps on an iPhone X; just imagine where we will be given another 2 years of development.
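For the curious, the core idea behind these portrait modes can be sketched in a few lines: given a depth map, blur each pixel in proportion to its distance from the chosen focal plane. This is a deliberately crude NumPy sketch (the function names are mine, and real pipelines use far better blur kernels, occlusion handling and edge matting), not any vendor's actual implementation.

```python
import numpy as np

def box_blur(image, radius):
    """Simple separable box blur, a crude stand-in for real lens blur."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def simulate_dof(image, depth_map, focus_depth, max_radius=4):
    """Blend the sharp capture with a blurred copy, weighting each pixel
    by its distance from the focal plane (a toy 'portrait mode')."""
    blurred = box_blur(image, max_radius)
    weight = np.clip(np.abs(depth_map - focus_depth), 0.0, 1.0)
    return image * (1.0 - weight) + blurred * weight

# A bright point on the "far" half of the frame gets blurred;
# the "near" half, on the focal plane, stays untouched.
img = np.zeros((16, 16)); img[8, 8] = 1.0
depth = np.zeros((16, 16)); depth[:, 8:] = 1.0
out = simulate_dof(img, depth, focus_depth=0.0)
```

Because the blur is driven entirely by the depth map, changing `focus_depth` after capture is what gives those apps their "set the aperture later" trick.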





A portrait of my wife on her 58th birthday, taken in portrait mode on my iPhone X, sure it's not perfect but most people would be more than happy with the result.


Resolution


As for resolution, the current default seems to be around 12 to 16 megapixels, enough for any sensible print size and already way more than the 1 to 2 megapixels needed for social media and web use.

There are a few outliers in the mobile world with higher resolutions, but the benefits have proven marginal due to smaller pixel sizes and lens limitations. A point to note is that to double the resolution from the current 12-16 megapixels, you'd need to go to about 48-64 megapixels. The real world difference between 16 and, say, 24 or even 30mp is not that great regarding print size possibilities.
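The arithmetic behind that claim: perceived print resolution scales with the linear pixel dimensions, which grow only with the square root of the megapixel count. A quick check (the helper name is mine, purely for illustration):

```python
import math

def linear_gain(mp_old, mp_new):
    """Linear (per-dimension) resolution gain between two pixel counts."""
    return math.sqrt(mp_new / mp_old)

# Doubling linear resolution from 16 MP requires four times the pixels:
print(round(linear_gain(16, 64), 2))   # 2.0
# Whereas a jump from 16 MP to 24 MP gains only ~22% in each dimension:
print(round(linear_gain(16, 24), 2))   # 1.22
```

Which is why a 24 or 30 MP sensor buys you surprisingly little extra print size over 16 MP.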

I feel that most of the bad press for smartphone resolution is the result of stupidly high levels of JPEG compression and noise reduction, something that becomes immediately obvious if you shoot in RAW/DNG on any late-model phone. DNG is a game changer when you want fine detail, though despite it being an option for a while, it's only recently that consumers and even serious photographers have become aware of the option and its impact on mobile image making.



DNG and Small Sensors


My experience is that smaller sensors benefit more from RAW/DNG than larger ones. At the pointy end of mobile imaging, RAW makes a huge difference and is not difficult to use. Smartphone users have less hassle with DNG/RAW than regular camera users do because the conversions can be done to perfection on the device. Note, though, that you need an alternative camera app on many smartphones to access RAW capability; for example, the standard iPhone camera app still does not offer DNG/RAW capture.

Some regular cameras offer “in camera DNG/RAW conversion” but usually it's clunky and minimal, few photographers would bother with it, the two methods cannot be compared.

Increasingly, better capture apps are adding RAW/DNG, though truthfully none as yet offer the perfect implementation, but the improvements are coming thick and fast, weekly at the moment. 

Appropriately processed RAW files can look quite analogue due to the noise characteristics and saturation mapping if exposure is held back at low ISOs, but even at higher ISOs the improvement in detail offered by shooting RAW is extraordinary. Generally I find that phone makers attempt to eliminate all image noise within the compressed files, but this trades off detail and texture; with RAW files you can tune noise to taste, and in real usage a little noise is not a bad thing and leaves the image looking more organic.

I imagine that we'll soon see the combination of DNG/RAW and smart in and off-phone computational processes to push the quality to even higher levels and provide extra flexibility, the possibilities really excite me.

The hardware, of course, still plays a significant part in quality, and makers have pushed the physics of smartphone sensors and lenses a long way in the past 3 years, but most of the big improvements have been related more to processing and computational methods.




This pic, taken in an underground exhibit in Dubai, looks a little soft. It was shot using Hydra on an iPhone, an app that image stacks as many as 30 or more frames. But this is desperation stakes because the light level was incredibly low; my regular camera was showing something in the area of 2 sec at f4 @ ISO 400, but I would have needed f5.6 minimum to get the DOF I wanted. In the end the iPhone X captured the shot at 1/15 at f1.8 at ISO 400 (per frame), so it's pushing the envelope. The end result is not perfect, but it tells the story OK.



Shadow Noise and Quality


The limit on image quality for smartphones isn't "highlight" rendering, as some photographers probably assume; after all, most of us hate that bleached-out highlight look. Rather, it's shadow noise that knocks the stuffing out of image quality.

Consider this: if you reduce the noise at the shadow end, you can cut back exposure to control highlights and thus get good tonality. Image stacking methods make it much easier to obtain low noise levels, and certainly native shadow noise is vastly improved on the current crop of smartphone sensors compared to those of just a generation or two ago.

There are many pathways to crack this shadow noise nut, we could:

Image stack a series of identically exposed images, none of which clip the highlight tones.

Use HDR methods with 2, 3 or more images exposed at different shutter speeds.

Fuse the outputs of several lens/camera modules that are identical but exposed without any highlight clipping.

Fuse the outputs of several lens/camera modules that are not identical.

Fuse the outputs of two camera lens modules, one for colour information and one for monochrome information.

Implement a sensor that has variable ISO at the pixel level.

Moreover, there are other ways as well. The point is, regardless of the method used, all involve significant computational processing to create the end output, and all are far easier to implement with small-sensor smartphones than larger-sensor DSLR and Mirrorless cameras.
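The first pathway, averaging identically exposed frames, is easy to demonstrate: random sensor noise falls roughly as one over the square root of the frame count while the scene itself is preserved. A minimal NumPy simulation (illustrative only; the noise level and frame count are arbitrary, and real pipelines also align frames and reject subject motion):

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# A flat mid-grey "scene" captured 16 times with random sensor noise.
scene = np.full((64, 64), 0.5)
frames = [scene + rng.normal(0.0, 0.1, scene.shape) for _ in range(16)]

# Stacking = per-pixel average; noise drops by about sqrt(16) = 4x.
stacked = np.mean(frames, axis=0)

single_noise = np.std(frames[0] - scene)   # roughly 0.1
stacked_noise = np.std(stacked - scene)    # roughly 0.025
```

That fourfold drop in shadow noise is why a tiny sensor firing 16 quick frames can trade blows with a much larger single-shot sensor.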

Various players in the field have already implemented all of the above methods, most are not yet fully ready for a full frontal prime-time DSLR attack, but you can be sure that they soon will be.

Other direct hardware options could include even deeper pixel wells, pixel binning from higher resolution sensors, more precise control of variable pixel quality across the sensor, precise pixel mapping for the whole sensor, wider apertures, (f1.4 should be possible with current technology), a combination of mono and colour sensors for better capture.  

Ultimately I reckon all current perceived low light/quality limitations of smartphone cameras will be solved by a combination of hardware (especially global shutter options) and software changes that use computational methods.

Going further into the future, we could have switchable pixel filters capturing RGB and luminosity in ultra quick succession, and vastly improved stabilisation options that use lens and sensor along with delay options to outdo Olympus. The latter would allow us to use longer exposures and thus lower ISO settings more easily (provided subject movement was not an issue, and even then there are well established computational solutions to help with that too).


Bit Depth


With the base level of image noise reduced, it's likely we'll see higher bit depths for RAW capture; currently, the base noise level negates the benefits of higher bit depths. Higher bit depth capture should reduce the banding we sometimes see in yellows and skies even with RAW files; it's an issue that has remained a PIA for many serious mobile shooters, and a fix would be most welcome.
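To see why bit depth matters for that sky banding, quantise a shallow, sky-like gradient at different depths and count how many distinct tonal steps survive. A toy illustration (the helper name and the 2% gradient are my own choices):

```python
import numpy as np

def quantize(signal, bits):
    """Round a [0, 1] signal to the nearest representable level
    at the given bit depth."""
    levels = 2 ** bits - 1
    return np.round(signal * levels) / levels

# A subtle gradient spanning just 2% of the tonal range, like a clear sky.
gradient = np.linspace(0.50, 0.52, 1000)

steps_8bit = len(np.unique(quantize(gradient, 8)))    # a handful of coarse steps
steps_12bit = len(np.unique(quantize(gradient, 12)))  # dozens of fine steps
```

With only a handful of levels available across the gradient, the 8-bit version renders as visible bands; the 12-bit version has enough steps to stay smooth, which is exactly the improvement higher bit depth RAW capture would bring.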

Summing Up


In the end, it's just much easier to implement advanced computational options when the device has a small lens and sensor (or multiple sensors), ultra fast shutters, powerful processors and constant net connectivity. 

The current target is really about bringing smartphone "image quality and look" up to the level of DSLRs and Mirrorless cameras without the downsides of complexity and weight, because that's what the mass market wants and will pay a premium for. Of course, many of these processes can be applied to regular cameras (some already are), but ultimately the cost is prohibitively higher and the benefits far smaller for end users.

Finally, even now, for a vast number of users the image quality from the latest smartphones is entirely sufficient; many consumers instead want better ergonomics and flexibility for general shooting needs. I, for one, imagine some sort of standardised accessory lens mount with high-end lenses to match would be a winner, judging by the number of people I come across struggling with the current offerings.

Most smartphone shooters are not too worried about shooting sport, extreme low light, star-fields or professional jobs, and those who are own other cameras for those purposes. But if their future smartphone could do a passable job of those tasks, that'd be rather nice.

The smartphone is currently not the answer to all our photographic needs, but increasingly with computational imaging options, it's becoming the answer to a broader array of them and along the way the improvements open up a whole array of new creative possibilities.

In the end, aside from some possible ethical considerations, computational methods can only be a good development for the world of photography; the future is exciting.


Links you may like to try to dive a little deeper:



https://www.youtube.com/watch?v=Gk7FWH12WLI

Video on the visual core used in the Google Pixel



https://blog.halide.cam/iphone-xs-why-its-a-whole-new-camera-ddf9780d714c

Article on computational photography as applied to the iPhone XS


https://gearburn.com/2016/07/smartphone-computational-photography/

Short article on computational photography processes from 2016, we've come a way since then.