Imagine such a camera based on the Nikon D3 sensor: sharpness equivalent to 20 megapixels, and nearly grain-free images shot at ISO 25,000! You'd be able to make pin-sharp hand-held photos in conditions so dark you could barely see what's going on.
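For what it's worth, here is the rough arithmetic behind that 20-megapixel figure, as a small Python sketch. The 1.7x mono-versus-Bayer factor is my own assumption, broadly consistent with the Kodak quote below:

```python
# Back-of-envelope only: how many "Bayer megapixels" a 12 MP monochrome
# sensor might be worth. The 1.7x factor is an assumption, roughly in
# line with the "6 MP mono ~ 12-24 MP color" quote further down.
d3_megapixels = 12.1    # Nikon D3 resolution (color version)
mono_factor = 1.7       # assumed mono-vs-Bayer sharpness factor

print(f"~{d3_megapixels * mono_factor:.0f} MP Bayer-equivalent")
# -> ~21 MP Bayer-equivalent
```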
Update:
This article from 2004 has data on a professional B&W camera Kodak made, but never perfected.
"Without an anti aliasing filter and no Bayer color matrix, the resolution of a 6 mega pixel monochrome camera is astonishing. In monochrome, 6 mega pixels effectively does what it takes 12-24 mega pixels with a color matrix." [...]
"I ended up in shock at watching exposure times go from 1/60 or 1/125 of a second with my Leica M6 and film, to 1/800, 1/1200 and even 1/1600 of a second for the same aperture with the DCS 760m. With a base ISO of 400 exposures times are brisk – another advantage of a digital monochrome over a color based sensor." [...]
"The image quality certainly beats the pants off of anything I have seen on film in medium format and often what I have seen in large format in terms of resolution, gradation and dynamic range.
So why am I crying in my soup at this point?
Along the way, a serious problem came up with the images from my [Kodak] 760m. There is a form of "banding" horizontal to the frame as the data comes out of the camera."
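Incidentally, the shutter-speed jump quoted above is easy to translate into stops. A quick sketch (some of the gain is presumably the DCS 760m's ISO 400 base against slower film; the rest is the missing Bayer filter):

```python
from math import log2

# Stops gained going from the quoted film exposure (1/60 s) to the
# quoted DCS 760m exposures at the same aperture.
film = 1 / 60
for mono in (1/800, 1/1200, 1/1600):
    print(f"1/{round(1/mono)} s: {log2(film / mono):.1f} stops faster than 1/60 s")
# -> 3.7, 4.3 and 4.7 stops respectively
```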
Bert helps:
"I think the color bayer filter cuts out like 70-80% of the light, that's two stops."
Well, I don't know the numbers, but filters by nature block out part of the light, and a substantial part too, in the case of RGB filters.
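That said, Bert's percentages and his "two stops" line up: stops lost are just the negative base-2 logarithm of the filter's transmission. A minimal check:

```python
from math import log2

# Stops lost for a filter that transmits only a fraction of the light:
# stops = -log2(transmission).
for transmission in (0.30, 0.25, 0.20):
    print(f"{transmission:.0%} transmitted -> {-log2(transmission):.1f} stops lost")
# -> 30%: 1.7 stops, 25%: 2.0 stops, 20%: 2.3 stops
```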
I am no optician, but the Bayer arrangement has always boggled me. There are two green pixels per image cell, supposedly to increase the luminance channel's responsiveness. If that's the goal, why not leave the fourth pixel uncolored? This would provide a serious boost in sensitivity in low light, and mimic the eye much better (1).
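To put a tentative number on that idea, here is a sketch comparing a classic RGGB cell with one where a green site is left unfiltered. The per-filter transmission values are illustrative guesses, not measurements:

```python
# Assumed, illustrative filter transmissions (not measured values);
# "C" is a clear, unfiltered site as suggested above.
TRANSMISSION = {"R": 0.25, "G": 0.30, "B": 0.25, "C": 0.95}

def cell_sensitivity(cell):
    """Average fraction of incoming light gathered over one 2x2 cell."""
    return sum(TRANSMISSION[p] for p in cell) / len(cell)

bayer = ("R", "G", "G", "B")    # classic Bayer cell
rgbc  = ("R", "G", "C", "B")    # one green swapped for a clear pixel

gain = cell_sensitivity(rgbc) / cell_sensitivity(bayer)
print(f"Clear-pixel cell gathers ~{gain:.1f}x the light")  # -> ~1.6x
```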
I suspect that the reluctance to do this has to do with charge bleeding, a phenomenon by which a strongly charged pixel leaks charge into neighboring, less strongly charged pixels (this was quite a problem in early CCDs, and most likely still is a concern). The greater the charge difference, the bigger the problem, so ultra-sensitive pixels embedded in a filtered matrix may be more of a problem than an asset.
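A toy model of that bleeding mechanism, just to make the failure mode concrete; the full-well figure and the equal four-way spill are cartoons, not device physics:

```python
import numpy as np

def bleed(frame, full_well=1000.0):
    """Spill any charge above full_well equally into the 4 neighbors."""
    frame = frame.astype(float).copy()
    excess = np.maximum(frame - full_well, 0.0)
    frame -= excess                       # clip the saturated pixel
    for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
        frame += np.roll(excess / 4.0, shift, axis=axis)
    return np.minimum(frame, full_well)   # neighbors saturate in turn

frame = np.full((5, 5), 100.0)
frame[2, 2] = 5000.0                      # one wildly overexposed pixel
print(bleed(frame)[1:4, 1:4])             # the 4 neighbors hit full well
```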
Mind you, I remember reading a few months ago about an SLR whose sensor does just that (isn't it from Sony?): small (uncolored?) pixels interspersed within the Bayer matrix to increase dynamic range. I should have paid more attention...
In any case, it is certain that a pure B&W sensor is far more sensitive (I own a few such video cameras, used for inspection & machine vision). I never measured it, but I would say that the sensitivity difference is more than two stops. But try to sell B&W sensors to the general population...
(1) The retina is composed of two different types of light-sensitive cells, namely cones & rods. (Apparently, a third type was recently identified; a lot more info in Wikipedia.) Cones detect color, while rods don't. The rods outnumber the cones by 10:1 or more, and are also much smaller, yielding a far better resolution.
The brain is the key element in this system. It uses the information from the rods to compose and decode an image, then uses the much sparser color information from the cones, when available, to add color to the overall picture. This is what newborns are so busy learning to do when they fiddle endlessly with small, well-defined, brightly colored objects.
During this learning phase, the preprocessor located in the optic stem learns to recognize basic primitive shapes, to which color information can easily be applied. We are still far from the day when a camera with similar capabilities will become available.
Also worth noting is the sensitivity of the rod cells. While color information simply vanishes in low light (try to reliably detect colors by moonlight), our vision is still quite good. In fact, I have read somewhere that the sensitivity of the rods is such that they react to a change in illumination corresponding to a candle being lit in Paris... for an observer standing in New York.
And we wonder why images taken by mere electro-mechanical contraptions never really represent the world as we see it through our own eyes...
Update: Please excuse the many typos in my previous post; I'm still trying to get used to a new keyboard. I'll try to do better this time.
One thing I forgot to point out previously is that scientific instruments never use Bayer-filtered image sensors, because of the information they throw away and the huge sensitivity penalty. This goes for all astronomical observation equipment (well, the few visible-light telescopes remaining, anyway), as well as for other space exploration devices.
For example, all cameras aboard the many Moon and Mars landers are monochrome, yet equipped with filter wheels. When a color image is needed, a series of exposures is made through various filters (seldom RGB; other bands are usually far more revealing).
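A sketch of that filter-wheel workflow, stacking three filtered monochrome frames into one color composite; the frames here are random stand-ins, and with non-RGB bands the same stacking simply yields a false-color image:

```python
import numpy as np

def compose_rgb(r_frame, g_frame, b_frame):
    """Stack three filtered monochrome exposures into one RGB image."""
    return np.clip(np.stack([r_frame, g_frame, b_frame], axis=-1), 0.0, 1.0)

# Stand-in "exposures" through three filter-wheel positions.
h, w = 480, 640
r, g, b = (np.random.rand(h, w) for _ in range(3))
print(compose_rgb(r, g, b).shape)   # -> (480, 640, 3)
```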
For now it looks like you'll either have to put it in B&W mode, or shoot color and then go through the channel mixer, channels and conversions to get what you want. Besides, if you have a camera that can shoot both color and B&W, you've cut your expenses on those pricey jewels.
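For reference, the channel-mixer conversion mentioned above boils down to a weighted sum of the three color channels. A minimal sketch, using the common Rec. 601 luma weights (tweak to taste):

```python
import numpy as np

def channel_mix(rgb, weights=(0.299, 0.587, 0.114)):
    """Weighted sum of the R, G, B planes -> one grayscale plane."""
    return rgb.astype(float) @ np.asarray(weights)

rgb = np.random.rand(480, 640, 3)   # stand-in color shot
print(channel_mix(rgb).shape)       # -> (480, 640)
```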
Would be interesting to see your mathematical data for your suppositions. One shouldn't make statements involving physics or other sciences with the word 'probably' in them. Even if it serves to help fill pages of a blog.
It'd be a thing for LEICA or HASSELBLAD to try to fill this B/W niche, and I'm still waiting for a wonderful camera like the MINOX spy camera, but digital. Or have you fallen asleep after all, Minox engineers, can you even hear me??
YabelKaache,
... Maybe Bert can back me up here: one thing is what the math and physics predict, another is what happens in real-world engineering.