Open question: is the RAW recipe in the A7rII the same as the RAW recipe in the A7r?
Officially, bit depth hasn’t improved. And flame wars still rage on the anti-Sony blogs and forums. One recent comment about Sony’s RAW read “Disgusting. Criminal. Unforgivable.” Eye-opening, really. I had naively thought rape and murder were unforgivable but it appears that Sony’s management of RAW files belongs to that same category of evil. Eye-opening, I tell you.
For those unaware of the broil, here’s the quick lowdown: even for its flagship cameras, Sony has been using compressed RAW, with the benefit of smaller files (about 50% smaller than those of the Nikon D800e I used to own, for instance). Lossy compression throws away information, and Sony has been getting severe stick for it all along. Compression, up to a point, can be lossless, meaning you can decompress the file back to its exact original data; lossless algorithms are more or less all equivalent in their modest gains over no compression at all. The more important concept is perceptual loss. Here, you enter a hazy zone governed not by an on/off switch (lossy/lossless) but by a slider (more or less lossy). What many are asking Sony for is lossless compression, which is one possibility. The alternative, for Sony, is to find a setting and algorithm that maximizes the benefits of compression while not introducing harmful artefacts. That solution will never please everyone, because some application will always bring up something nasty. Sony knows who its important clients are and will undoubtedly optimize for their use case.
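For the technically minded: the reverse-engineered descriptions of Sony’s compressed ARW format that have circulated online (16-pixel blocks per colour channel, 11-bit endpoints after a tone curve, 7-bit deltas for the rest) suggest a toy model like the one below. This is emphatically not Sony’s code, and the bit-shift is a crude stand-in for Sony’s actual tone curve; it’s just a sketch of how block-based delta coding trades precision for file size.

```python
import numpy as np

def compress_block(block14):
    """Toy model of the reported ARW2-style lossy scheme: 14-bit values
    pass through a tone curve down to 11 bits, then each 16-pixel block
    keeps its minimum and quantized 7-bit deltas for everything else."""
    # Plain bit-shift standing in for Sony's piecewise tone curve (an assumption).
    b11 = block14 >> 3
    lo, hi = int(b11.min()), int(b11.max())
    # The delta quantization step grows with the contrast inside the block.
    step = max(1, 2 ** int(np.ceil(np.log2(max(hi - lo, 1) / 127))))
    deltas = np.round((b11 - lo) / step).astype(np.int64)
    recon11 = lo + deltas * step
    return recon11 << 3  # back to the 14-bit scale
```

The appeal is obvious: a full-contrast block costs exactly the same number of bytes as a flat one. The price is paid in delta precision whenever a block contains both very bright and very dark pixels.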
My question in this article is: has this improvement process already secretly begun with the A7rII? For one thing, files are no larger than the A7r’s. If anything, they’re smaller. Then, there’s this …
Wha’ever. Does unchanged bit depth mean an altogether unchanged RAW? For the A7rII, Sony has obviously gone to tremendous lengths to answer most of the past criticisms raised against the A7r, and more. Is it inconceivable that a discreet algorithmic upgrade of RAW processing slipped into production cameras, with Sony keeping it quiet to stay out of the trolls’ way?
Yesterday, I set out to make a few pics of the Milky Way before having to send ‘my’ Distagon 2.8/15 ZF.2 to Zeiss (review on the A7r & A7rII coming soon).
My usual processing of these files involves pushing saturation & vibrance to the max to set white balance, before sliding back down to more normal settings, as described in this previous article.
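For readers who prefer seeing the move spelled out, here is a rough Python/numpy equivalent of that saturation push (hypothetical file name, and of course the real work happens on a raw editor’s sliders, not in code):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def exaggerate_saturation(rgb, boost=4.0):
    """Multiply saturation hard (clipped at 1.0) so any residual colour
    cast in supposedly neutral areas becomes impossible to miss."""
    hsv = rgb_to_hsv(np.clip(rgb, 0.0, 1.0))
    hsv[..., 1] = np.clip(hsv[..., 1] * boost, 0.0, 1.0)
    return hsv_to_rgb(hsv)

# e.g., with rawpy (file name is made up):
# rgb = rawpy.imread("DSC01234.ARW").postprocess(output_bps=16) / 65535.0
# preview = exaggerate_saturation(rgb)  # tune white balance on this, then revert
```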
Doing so with files from the A7r produces square artefacts around stars, as seen above. The same intermediate processing on A7rII files did nothing of the kind. Below is a file at a similar stage: the stars are (1) out of focus and (2) slightly elongated by the long exposure, but there is nothing weird going on around them.
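Feeding numbers into the toy model from earlier shows why the damage takes that square, localized shape: a single hot star pixel inside a block coarsens the delta step for all fifteen of its neighbours at once.

```python
import numpy as np  # compress_block as defined in the sketch above

sky = np.arange(800, 816, dtype=np.int64)   # a faint, smooth gradient
print(compress_block(sky))
# the gradient survives, give or take the tone-curve rounding

star = sky.copy()
star[7] = 15000                             # one bright star pixel
print(compress_block(star))
# every faint neighbour in the block snaps to the same coarse level
```

Once saturation is pushed to the max, those posterized neighbours stop blending into the sky, which is plausibly why the artefacts read as little boxes around bright stars.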
Now, this is not to say lossy compression is ideal or perfect, and if lossless RAW compression would add meaningfully to the already improved image quality, we’re all for it here at DS. But something has already changed and, apparently, in the right direction. If someone out there knows better and can explain the nature of the changes (it could even be a change in the post-processing software, who knows?), I’d love to hear from you. Please leave a comment.
And to round up the discussion with a real-world photo, here’s what the Milky Way turned out like from my light-polluted village in the South of France. ISO 12800, 30 seconds. Not too shabby, right? Go Sony.
As an aside: the tiny speck of light on the hill at lower left (in the bright yellow zone) is a church built into a large grotto in honour of Mary Magdalene (who, according to local tradition, ended her days there in prayer), the exact spot from which this sunrise photography walk article was written.