Here's a challenge for you brainy types:
There's a lot of evidence that in many instances, you can increase the level of detail a camera delivers by either improving the lens or by increasing the pixel number.
It seems to me that logically, if one works, the other shouldn't. Because if improving the lens makes the picture sharper, that means each pixel is smaller than the lens's blur spot; and if that's the case, then adding more pixels should make no difference unless you also improve the lens.
And conversely, if increasing the pixel density makes the picture sharper, that means the lens's blur spot is smaller than each pixel, so it should make no difference to improve the lens (without also increasing the pixel density).
Update: Ctein, a top dog in photographic technical savvy, helps me out:
A technically accurate explanation is very complicated, but I can give you the easy version by referring back to film photography (the mathematics is very different for discrete digital samples, so this is qualitatively correct but not numerically exact).
In analog, continuous imaging, the final blur circle is the square root of the sum of the squares of all the contributing blur circles. That is:
Blur(total) = SQRT (blur1^2 + blur2^2 + blur3^2 …).
Examples of sources of blur are film resolution, lens resolution, focus error, and camera vibration. For the sake of simplicity, ignore everything but film and lens resolution.
If you have a lens that is capable of resolving 100 line pair per millimeter and film that is capable of resolving 100 line pair per millimeter, the above equation says that the combined resolution will be 70 line pair per millimeter (approximately). In-camera film tests bear this out, by the way.
Suppose you were to double the film resolution. The combined resolution of film and lens would now be 90 line pair per millimeter, about a 25% improvement. A visible and significant change. Conversely, if you doubled lens resolution, you'd also see 90 line pair per millimeter. (If you doubled both, the resolution would jump to 140 line pair per millimeter.)
An asymmetric example: let's start with film that resolves 100 line pair per millimeter and a lens that resolves 150 line pair per millimeter. The combined resolution is 83 line pair per millimeter. If you double the film resolution, the combined resolution improves to 120 line pair per millimeter. If you double the lens resolution instead, the combined resolution is 95 line pair per millimeter. Improving either resolution visibly improves total resolution, even when one is much better than the other.
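Ctein's figures can be checked with a few lines of code. A minimal sketch (the function name is mine) combining component resolutions via the root-sum-of-squares blur model, i.e. 1/Rt² = 1/R1² + 1/R2² + …:

```python
import math

def combined_resolution(*resolutions):
    """Combine per-component resolutions (lp/mm) under the
    root-sum-of-squares blur model: 1/Rt^2 = sum(1/Ri^2)."""
    return 1.0 / math.sqrt(sum(1.0 / r**2 for r in resolutions))

# Symmetric case: 100 lp/mm film + 100 lp/mm lens
print(round(combined_resolution(100, 100)))  # 71 (Ctein's "approximately 70")

# Doubling either component
print(round(combined_resolution(200, 100)))  # 89 ("about 90")

# Asymmetric case: 100 lp/mm film + 150 lp/mm lens
print(round(combined_resolution(100, 150)))  # 83
print(round(combined_resolution(200, 150)))  # 120 (double the film)
print(round(combined_resolution(100, 300)))  # 95  (double the lens)
```

Each printed value matches the corresponding figure in the examples above, including the counter-intuitive one: improving the already-stronger lens still lifts the total from 83 to 95 lp/mm.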
To put it colloquially, it's not the weakest link in the chain that determines the resolution of your system, it's the combined weakness of all the links. A chain with two weak links is less strong than a chain with one weak link.
As I said at the beginning, the math is very different for digital camera sensing, but it would lead you to the same conclusions ( although with different numerical results, of course) for pretty much the same reasons.
----
Thank you so much to Ctein (it's a hard C: "K-tein"). This is cool. It tells us that the two factors have to be way out of balance for it not to pay off to improve even the stronger one. (But of course it will always pay off better if you improve the weaker one.)
It's still counter-intuitive to me, but since it's supported both by math and by experience, I guess I'll have to bow to it! :-)
And by the way, an example of what this means in practical terms: if I do eventually decide to get a high-res camera like the new Canon 5D, I don't have to invest in the very newest and most expensive lenses for it to make a difference (in big prints).
See, this was not just theory for theory's sake. :-)
Of course it's still a question of what that might mean for me artistically. And so far it seems to me that usually the answer is "not very much", so I'm cooling my heels. After a point, there's a diminishing "return on investment" in resolution. Where that point is, is a matter of application. Landscapes or architecture usually need much more resolution than portraits or snapshots.
7 comments:
Imagine you are capturing an image, half of which is pure black and the other half pure white. Suppose your lens is good enough to reproduce levels 0% and 100% on the sensor. You need just 2 pixels to capture such an image. You increase the pixel count to 100,000,000 pixels and you still get half at 0% and half at 100%.
Suppose now that your lens is not so perfect, and instead of a clean 0-100% transition it produces a wide border of fluctuating brightness ranging from 5% to 95%. Now two pixels are not enough: you want more just to see how bad your lens is. Using some weird math, plus additional knowledge about the z-axis of the scene, the megapixels could help you restore the image damaged by the lens and bring back half at 0% and half at 100%, i.e. a two-pixel image.
Or, instead, you just pick a more expensive lens that allows you to stay with the old trusted two-pixel sensor...
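Dibutil's thought experiment can be sketched in code: sample a blurred black/white edge with many pixels, then threshold it back to the ideal two-level image. A minimal illustration, assuming a simple 1-D scene and a box-blur lens model (all names and parameters here are mine, not from the comment):

```python
# Hypothetical 1-D scene: left half black (0.0), right half white (1.0),
# degraded by an imperfect lens, modeled as a simple box blur.
def box_blur(signal, radius):
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

scene = [0.0] * 50 + [1.0] * 50      # ideal "two-pixel" image, oversampled
blurred = box_blur(scene, 10)        # lens blur smears the edge

# With many samples across the blur zone, a simple 50% threshold
# recovers the original step -- the "restoration" alluded to above.
restored = [1.0 if v >= 0.5 else 0.0 for v in blurred]
print(restored == scene)  # True for this symmetric edge
```

The point survives the simplification: the extra pixels carry the information needed to undo (part of) the lens's damage, which is exactly why more megapixels can help even behind an imperfect lens.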
Hej Eolake,
I have no idea how good your German is, but this site:
http://www.6mpixel.org/
has some very interesting thoughts on that subject!
Bedste hilsner
Kim
Tak for det, Kim.
Sadly my German is mediocre.
Dibutil: thanks. This hurts my head a bit, but I'll keep working at it. I am beginning to see the logic...
Dibutil: ... but: the sensor/camera/software does not *have* any "weird math" to restore the image.
Ah, they have an English page:
http://6mpixel.org/en/
In general I agree with them, but the trouble is that it's not absolute. For example, when the 39-megapixel camera backs came out, it turned out they really did enhance resolution, despite the fact that the limit of the lenses should have been reached long ago.
Eolake,
Cameras/RAW processors have at least sharpening, a way of restoring contrast at edges. They also apply contrast/curves/profiles etc.
If 40 MP is said to improve resolution over 20 MP, that means the lens limit had not been reached.
A while ago I remember discussions about megapixels vs. the actual number of details captured. On my first digicam ever (a Minolta 7Hi, which still works fine), the 5 MP sensor in conjunction with its 28-200 zoom measured 2 million details tops.
Once there is a method for such measurement, we can talk about the lens limit, as well as about resolution improvement from megapixel increases.
Unfortunately the actual detail number seems discouraging compared to the megapixel count, and I have not seen any discussions about it in years.
"There's a lot of evidence that in many instances, you can increase
the level of detail a camera delivers by either improving the lens *or* by increasing the pixel number.
Yes, this is true. Although it is very counter-intuitive, experimentation shows this to be the case. Every element in an imaging chain affects the final resolution. The mathematical relationship is:
1/Rt = 1/R1 + 1/R2 + 1/R3 + …
where Rt is the overall resolution and R1, R2, etc. are the individual resolutions.
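Note that Steve's reciprocal-sum rule is a different (and more conservative) approximation than Ctein's root-sum-of-squares; both are conventions seen in resolution discussions. A quick sketch comparing the two for a 100 lp/mm lens on a 100 lp/mm sensor (function names are mine):

```python
import math

def rss(r1, r2):
    """Ctein's model: 1/Rt^2 = 1/R1^2 + 1/R2^2."""
    return 1 / math.sqrt(1 / r1**2 + 1 / r2**2)

def reciprocal(r1, r2):
    """Steve's model: 1/Rt = 1/R1 + 1/R2."""
    return 1 / (1 / r1 + 1 / r2)

print(round(rss(100, 100)))         # 71
print(round(reciprocal(100, 100)))  # 50
```

The numbers differ, but both formulas lead to the same qualitative conclusion: every element in the chain degrades the total, so improving either element improves the result.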
In a digital camera there are only two factors, the lens and the sensor.
It is hard to understand.
Steve