New test numbers say the Nikon D3x is better than a Hasselblad, but Mike R says you can't believe that, which I can believe. (This article may cause confusion by having the same introduction as the one I linked to yesterday.)
Discussing RAW files, Bert informs:
I guess you'll have to take my word for this, but it is a fact that large image sensors used in all but the most expensive cameras (NASA-grade & up) have some flaws, which need to be corrected either through specialized hardware or low-level firmware.
And when I say large sensors, I mean just about anything above the camera phone.
So it's a fact of life that there is no such thing as true RAW data coming out of DSLRs. There is always some amount of cooking before delivery, or else you would see huge output variations between otherwise identical units.
High-end professional equipment (Red & such) users know and understand this, and will prefer to have compensation done in post-processing, allowing for periodic calibration of the equipment. In any case, true RAW data looks better than movie negatives do, trust me on that (screening non-color-corrected rough cuts can be downright painful!), so they are used to working with raw materials that look like shit, pardon my language.
But nobody in his right mind would want to attempt educating the more casual users to such realities, so "normal" cameras are factory-calibrated, and that's not about to change. Much more convenient for you and me anyway. Might mean that the camera's output will become slightly sub-optimal in time, but the short lifespan of most products makes this irrelevant.
So, now that we have established the need for in-camera processing, we have to decide where to draw the line. How raw is RAW? Every manufacturer will have a different answer here, and that answer will even change across specific product lines.
Point & shoot cameras will always overcook everything; it's what is expected from the thing anyway. So RAW will only mean uncompressed, and perhaps wider dynamics (i.e. 10/12/14 bits instead of 8), but hardly more.
The more you climb up the scale, the more honest the data may be (better quality sensors to start with anyway), but there will always be some subjective choices made by the manufacturer. To use an imperfect analogy, audio buffs only have to think of how the Japanese have never been able to build decent speakers, because they simply cannot resist coloring the sound. Same here, some manufacturers will be more "flat", while others won't be able to resist pushing the line.
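A back-of-the-envelope sketch of the "wider dynamics" point in the quote above, in Python; the bit depths are just typical figures, not tied to any particular camera:

```python
# How many distinct tonal levels survive the trip from a 12-bit raw
# capture down to an 8-bit (JPEG-style) output. Purely illustrative.
raw_bits = 12            # many DSLRs record 12- or 14-bit raw samples
jpeg_bits = 8            # JPEG output is 8 bits per channel

raw_levels = 2 ** raw_bits      # 4096 distinct levels
jpeg_levels = 2 ** jpeg_bits    # 256 distinct levels

# Each group of raw values below collapses into a single output value
# once the file is "cooked" down to 8 bits.
print(raw_levels // jpeg_levels, "raw levels per 8-bit output level")  # -> 16
```

That headroom is what lets a raw converter pull usable detail out of highlights and shadows that an 8-bit file has already thrown away.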
13 comments:
Reichmann's argument is right on the money, both on the resolution and the AA issues, as far as I'm concerned.
"AA"?
AA = Anti-Aliasing.
Should have (more accurately) said in-camera processing of RAW data.
Yes, that's also the part I found interesting, and I said so on tOP. If Mike is right, that might make an irreconcilable difference.
What I don't get is, how can they call it "raw" data if it's processed? I don't get humans.
You could call it half-cooked, if you wish. :-)
Thank you very much. Interesting!
Why would output change over time?
And why would raw file cooking change that fact? (I guess the cooking "looks" at the file first.)
Why would output change over time?
It would be impossible to give a definitive answer to this question. Mostly because of the number of variables involved, but also because the details of such an answer are likely to be semiconductor-process-dependent (i.e. not all sensors will age the same way, it really depends on the fabrication recipe).
But if you think of an image sensor as a sandwich of thin layers, each barely more than a few atoms thick, you will realize how fragile it really is. It definitely is possible to alter such delicate structures by bombarding them with energetic particles, which is precisely what happens when the sensor is exposed to light.
The odd high-energy photon that will reach the sensor with just the right energy to break an atomic bond might not do much damage, but long-term repetition of such accidents will lead to changes in performance.
And there are many, many other similar aging processes related to temperature, static electricity buildup & discharge, electro-migration, etc.
And why would raw file cooking change that fact?
The basic principle of sensor calibration is simple: look at a succession of known targets, and build a map of the differences between the captured images and what is expected. Then, using this same map (stored in the image-processing engine), you can compensate for variations and defects.
That's where it becomes tempting to try to "boost" the output to compensate for the shortcomings of a given sensor type. But once you enter that loop, RAW is no longer raw...
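The "map of differences" idea can be sketched roughly as follows; the simple per-pixel offset/gain model and the function names are assumptions made for illustration, not any manufacturer's actual calibration pipeline:

```python
# A minimal dark-frame / flat-field style sketch of per-pixel calibration.
# Assumes two known captures: a dark frame (no light) and a flat frame
# (a uniformly lit target).
import numpy as np

def build_calibration(dark_frame, flat_frame):
    """Derive a per-pixel offset and gain map from the two known captures."""
    offset = dark_frame.astype(np.float64)
    flat = flat_frame.astype(np.float64) - offset
    gain = flat.mean() / np.clip(flat, 1e-6, None)   # even out pixel response
    return offset, gain

def apply_calibration(raw_frame, offset, gain):
    """Compensate a new capture using the stored map."""
    return (raw_frame.astype(np.float64) - offset) * gain
```

Whether the file handed to the user has already been through something like apply_calibration, or through something considerably stronger, is exactly the "how raw is RAW" question.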
There is something that needs to be understood about the comments I made above: some degree of cooking is definitely desirable.
The "raison d'ĂȘtre" of RAW shooting is not to please the forum retards in search of perfection.
Dead pixels are present on many (if not most) sensors, just get used to the idea. But who wants to see a dead pixel? That's why they are systematically fixed by in-camera hardware, and that's OK. The argument that one would need the ability to choose the substitution algorithm to use has no place in photography applications. Heck, if there was such a difference between one method or the other, then it would be possible to identify the substituted pixels at post-processing, making the "need to choose" argument moot anyway.
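A rough sketch of that kind of substitution, under the assumption of a factory-known defect list and a simple greyscale frame (real cameras do this in hardware, on the Bayer mosaic):

```python
# Replace each known dead pixel with the median of its immediate neighbours.
import numpy as np

def fix_dead_pixels(frame, dead_coords):
    """frame: 2-D array; dead_coords: list of (row, col) defect positions."""
    out = frame.astype(np.float64).copy()
    h, w = frame.shape
    for y, x in dead_coords:
        # Gather the valid neighbours around the defective site,
        # excluding the dead pixel itself.
        neighbours = [frame[ny, nx]
                      for ny in range(max(0, y - 1), min(h, y + 2))
                      for nx in range(max(0, x - 1), min(w, x + 2))
                      if (ny, nx) != (y, x)]
        out[y, x] = float(np.median(neighbours))
    return out
```

Done well, the substituted value blends in with its neighbours, which is why arguing over the choice of algorithm is moot in practice.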
RAW exists primarily as a means of delivering images free of compression artifacts, and that's crucial. Retaining all of the original image dynamic is also of immense benefit in many situations.
But there definitely would be no benefit to be gained from having access to the real output of the sensor, with all its flaws and shortcomings. Much intelligence and know-how has been infused into the correction & calibration hardware and software of high-end cameras, and this should rightfully be considered part of the imaging device.
That's one reason why the same sensor used in cameras from different manufacturers may yield very different results. And that's OK. That's what true brand names are all about.
Thanks. All makes sense.
BTW, do you photograph?
BTW, do you photograph?
Used to, but not for a loong time. Burned myself in the darkroom, just got fed up with being in the dark all the time.
Been meaning to start again with digital, but life has changed, and I just can't seem to find the time nor the energy. T'will come back, though.
I couldn't resist at least owning a digicam. Even the tiny ones make huge pictures now.
Oh, I have one of those. Even have access to a high-end Canon system. It's the drive that's gone.