It doesn’t work. Let’s be honest. Pixel shift is a brilliant, brilliant idea in a pantheon of brilliant, brilliant technical ideas. But it doesn’t work. And here’s why.
Like all clickbait titles, this one is a partial lie 😉 Do you remember when Hasselblad showed their first pixel-shifting results using their large 100Mp sensor? Not only did it work, but it blew my mind.
Even as an unrepentant luddite, I absolutely adore the idea of pixel-shifting. Most of the time (99% of the time) there really is no need for very high resolutions in photography. They are a complete waste of time, natural resources and mental focus. But, on those rare occasions when the scene and intended photographic use warrant the big numbers, wouldn’t it be brilliant to be able to pull out the big gun and grab that perfect 200Mp photograph (think Candida Höfer, for instance)?
What a shame it doesn’t work, present Hassies excepted, then 😉
Now, before we jump into why it doesn’t work, you might challenge this claim. Particularly as my main camera doesn’t even have pixel shift, making my hands-on experience of it about as meaningful as western leaders’ experience of curbing a pandemic.
But I’ve tried it by proxy. Philippe did quite thorough testing with the Sony A7r4 and it really took some concentration to find differences between files made with or without pixel-shift. In a blind test, I would never have found the shifted files. And yes, he was using a tripod, a timer and a good lens.
And I’ve looked at many other files made with other pixel shift cameras, and been equally unimpressed. Which doesn’t mean pixel-shift can’t be made to work. Only that, in real life conditions, it’s really difficult. Here’s why.
There are the obvious reasons everyone knows about: any movement in the scene between the exposures (leaves, water, clouds, people) and any movement of the camera itself will ruin the merge.
But it goes further. A moving frame of reference (such as the subtly trembling ground of all large cities, set in motion by cars, buses and underground trains) will also create more blur. And, of course, wind will degrade quality. As will a clunky shutter. And any slight loosening of anything in the imaging chain.
So, pixel shift can work, and does work, in lab conditions and with careful post processing. Anything approaching real-life conditions very often stretches the very tight tolerances for success a step too far. But that’s still not why pixel-shift doesn’t work. Lenses just aren’t sharp enough!
Another way to put this is that pixels are too small. Here’s where Harry Nyquist and Claude Shannon come in handy to explain. And let me call Zeiss to illustrate when and how it can work.
Below is the MTF curve for the Zeiss Otus 100, possibly the closest to ideal lens Zeiss have ever made. The lines show the contrast with which finer and finer details are transmitted to the sensor by the lens. Just a reminder: Zeiss (and a couple of other manufacturers) measure these MTFs. Most other companies calculate them theoretically, showing what an ideal lens built to perfect tolerances and surfaces would produce.
Theoretically, a politician elected by the people and paid by the people’s taxes should have the people’s best interest at heart. In real life, just look around. Theoretically, if we all wore masks, Covid would be extinct in 3 weeks. In real life, it’s been around for more than a year and most predictions now stretch to 2023. In theory, trains run on time. In real life, I’ve seen people cry on the platform “no, not another cancelled train, I cannot lose another job”. Real-life measurements have nothing to do with theoretical projections. Hence the selection of Zeiss measurements (for what is possibly their best measuring lens ever).
Consider this to be the best performance money can buy. If there is a better measuring lens out there, it won’t be better by much (and please don’t point me to measurements done in a garage, a proper MTF measurement rig costs multiple 6 figures).
The bottom lines show us roughly 80% contrast transfer at 40 line pairs per millimetre (lp/mm). At f/4. You can roughly extrapolate from this by doubling the contrast loss for every doubling of the resolution. So possibly 60% contrast transfer at 80 lp/mm and 20% at 160 lp/mm. Those 20% contrast lines (starting with pure black and pure white lines in the chart) would be pretty close to the threshold of detail visibility for real-life details. So, in a world without diffraction (see the discussion about aperture, below), the Otus 100 would max out at about 160 lp/mm.
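For the numerically inclined, here is that rule of thumb as a tiny Python sketch. It’s a back-of-the-envelope extrapolation, not Zeiss data: only the ~80% at 40 lp/mm anchor point comes from the chart, the doubling rule is my rough approximation.

```python
import math

# Rough MTF extrapolation, per the rule of thumb above: contrast LOSS
# doubles with every doubling of spatial frequency.
# Anchor point (from the Zeiss chart): ~80% contrast at 40 lp/mm, at f/4.

def extrapolated_contrast(lp_per_mm, base_lpmm=40.0, base_contrast=0.80):
    octaves = math.log2(lp_per_mm / base_lpmm)  # doublings above the anchor
    loss = (1.0 - base_contrast) * (2.0 ** octaves)
    return max(0.0, 1.0 - loss)

for freq in (40, 80, 160):
    print(f"{freq:>3} lp/mm -> ~{extrapolated_contrast(freq):.0%} contrast")
# 40 lp/mm -> ~80%, 80 lp/mm -> ~60%, 160 lp/mm -> ~20%
```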
That’s details roughly 3 microns wide, the size of a smallish pixel. At the best aperture of (possibly) the world’s best lens.
Enter sampling and the Nyquist-Shannon theorem. Simplified, the theorem states that the maximum frequency you can sample without aliasing with a sampling frequency of f is f/2. In sensor terms, this means you need 2 pixels (along each axis) to sample a detail the size of a pixel. So, to make use of the Otus 100’s 160lp/mm max resolution (3 micron), you need pixels half as large in each direction. So, 1.5 micron equivalent pixels. Considering most pixels fall in the 3-6 micron range, there’s a lot of room here to double the resolution and make the most of pixel shifting, in theory.
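Putting numbers on that, here’s a minimal sketch, assuming nothing cleverer than the two-pixels-per-detail rule just described:

```python
# Pixel pitch needed to exploit a lens's resolving power, following the
# simple rule above: two pixels (per axis) across the smallest detail.

def detail_size_um(lp_per_mm):
    """One line of a line pair, in microns (a full pair spans 1000/lp_per_mm um)."""
    return 1000.0 / (2.0 * lp_per_mm)

def required_pixel_pitch_um(lp_per_mm):
    """Two samples per detail, per axis."""
    return detail_size_um(lp_per_mm) / 2.0

for lpmm in (80, 160):
    print(f"{lpmm} lp/mm: detail ~{detail_size_um(lpmm):.1f} um, "
          f"needs ~{required_pixel_pitch_um(lpmm):.2f} um pixels")
# 80 lp/mm: detail ~6.2 um, needs ~3.12 um pixels
# 160 lp/mm: detail ~3.1 um, needs ~1.56 um pixels
```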
If you have the shooting technique to exploit pixels that small (no vibration at all, for instance) then you see that, in theory, you can make pixel shift work with the Otus 100 even starting with the very small pixels of today’s sensors.
You also see how excellent the lens has to be using current small-pixel sensors, and how perfect your technique has to be to maintain 1 micron-ish accuracy (vibrations …) throughout the multiple exposures, to get the best out of it.
With less exceptional lenses, the real-life MTFs might max out at 80 lp/mm. Meaning you can get away with lesser technique, but also that small pixels will start oversampling the details.
And that’s where the problem is. What we are seeing more and more are ultra high density sensors with tiny pixels and lenses that are nowhere near as good as that Otus, whatever manufacturers and fan clubs may claim. Using my 120 Macro on the large-pixel X1D camera, files look every bit as sharp at 100% as they do at 20%. With other lenses, particularly older adapted lenses, 100% is nowhere near as perceptually sharp as the global view of the photograph.
In fact, that’s quite a good test to see for yourself whether a lens will be suitable for pixel-shifting. Look at a photograph made with that lens in your PP software. Then switch to 100%. If you notice a drop in sharpness, forget it. How could finer sampling of what the lens projects possibly be sharp if it isn’t perfectly sharp at the normal sampling?
And then, there’s aperture …
In theory again, the wider the aperture of a lens, the sharper it can be. In real life, aberrations always make full aperture significantly less sharp than closed down a few stops. But, let’s stay in the realm of theory for a while.
At f/2, a perfect lens has an Airy disk of 2.6 microns. At f/8, a perfect lens has an Airy disk of over 10 microns. And at f/22, a perfect lens has an Airy disk of almost 30 microns.
The Airy disk is the disk of light produced by a lens imaging a pinpoint light source (infinitely small, such as a star). The disk is surrounded by rings of decreasing intensity. Treating the size of the Airy disk as the smallest possible detail the lens can record on a sensor is a wildly optimistic hypothesis, but let’s go with it.
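Those numbers follow from the standard diffraction formula, d = 2.44 × λ × N. A quick sanity check in Python, assuming green light at 550 nm:

```python
# Airy disk diameter (to the first dark ring) of an ideal lens:
# d = 2.44 * wavelength * f_number. Assumed wavelength: 550 nm (green).

WAVELENGTH_UM = 0.55

def airy_disk_diameter_um(f_number):
    return 2.44 * WAVELENGTH_UM * f_number

for n in (2, 8, 22):
    print(f"f/{n}: ~{airy_disk_diameter_um(n):.1f} um")
# f/2: ~2.7 um, f/8: ~10.7 um, f/22: ~29.5 um
```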
You can see that at f/8, that smallest detail is already 10 microns wide. More realistically 15-20 microns. You can sample that as much as you want with as much pixel-shifting as technology will allow; you’re just oversampling a coarse detail (unless you started with really mahoosive 30 micron pixels 😉 😉 ). So, if that pixel-shifted master-shot is a landscape and you were thinking of using a small aperture to maximise depth of field, well … ain’t happening.
So, there you have it: Nyquist was born in Sweden, which is why pixel shift only works on Swedish cameras such as the Hasselblads! 😉
More seriously, that Hasselblad shocker was made in a studio using a sensor with large pixels, a lens probably as good as that Otus and deliberate post processing. It can work, we’ve seen it happen. But it don’t work easy and it don’t work often.
As Jim Kasson has shown far more technically than me, pixel-shift has other benefits, mostly a reduction of aliasing in 16-frame executions. In very specific conditions, for very specific uses, it is a very useful technique. But for increasing resolution of landscapes or other spectacular scenes, forget it.
Sidenote: a related technique is used in astrophotography. It was invented by NASA decades ago to compensate for the undersampled images produced by one of Hubble’s cameras. When the camera undersamples the image projected by the lens or telescope (i.e. has pixels too big to make use of all the information provided by the optics – which is not the case with small pixels and average lenses), dozens or hundreds of individual photographs can be dithered (shifted and rotated very slightly, by subpixel amounts, in relation to one another) and stacked onto a grid with 4 times the original resolution (doubled on each axis). The algorithm then infers the missing data, producing a higher resolution final image (albeit with more noise).
The initial premise is that the lens has much higher resolution than the sensor can capture. The opposite of today’s usual operational conditions in the amateur photography world. When you see high-resolution pixel-shifted images from cameras with small pixels, particularly phones, you are in the presence of very clever inference algorithms, not free data 😉
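For the curious, here is a toy sketch of that shift-and-stack idea. It is purely illustrative and much cruder than real drizzle, which distributes each pixel’s light over fractional overlaps; this nearest-neighbour version just drops values onto a finer grid at their known sub-pixel offsets:

```python
import numpy as np

def stack_onto_fine_grid(frames, shifts, scale=2):
    """Toy shift-and-add: accumulate sub-pixel-shifted low-res frames
    onto a grid with `scale` times the resolution along each axis.
    frames: list of HxW arrays; shifts: matching (dy, dx) offsets in
    low-res pixel units, known to sub-pixel precision."""
    h, w = frames[0].shape
    acc = np.zeros((scale * h, scale * w))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, shifts):
        # Drop each coarse pixel onto its (shifted) fine-grid position.
        ys = np.clip(np.round((np.arange(h) + dy) * scale).astype(int), 0, scale * h - 1)
        xs = np.clip(np.round((np.arange(w) + dx) * scale).astype(int), 0, scale * w - 1)
        acc[np.ix_(ys, xs)] += frame
        hits[np.ix_(ys, xs)] += 1.0
    return acc / np.maximum(hits, 1.0)  # average where frames landed

# e.g. four frames dithered by half a pixel fill a 2x grid completely:
# stack_onto_fine_grid(frames, [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)])
```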
Me? I stitch. Works a treat.