I'm looking for information (and references) on three related topics:
What causes banding noise in CMOS sensors? What is the physical/technological cause? Is the cause the same in CCD and CMOS sensors?
How do the various relevant factors (ISO setting, exposure time, and exposure level) influence the banding strength and pattern?
Is the band pattern stable in the short term (sequential shots) and in the long term? Is there a chance the band pattern of an actual sensor could be measured and used to reduce the effect in actual photos?
To clarify what I mean by banding, here's an example image from the TopazLabs website. Notice the horizontal bands in the noisy image.
A naive experiment didn't show any positive correlation between the banding in several images of a plain white surface.
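For what it's worth, that naive experiment can be quantified. Here is a minimal sketch (assuming grayscale flat-field frames as NumPy arrays; the function names are my own) that correlates the per-row band profiles of two frames — a correlation near 1 would suggest a fixed, correctable pattern, while a value near 0 suggests the pattern is random from shot to shot:

```python
import numpy as np

def row_profile(frame):
    """Average each row to isolate horizontal banding, minus the frame mean."""
    return frame.mean(axis=1) - frame.mean()

def banding_correlation(frame_a, frame_b):
    """Pearson correlation between the row-band profiles of two frames."""
    a, b = row_profile(frame_a), row_profile(frame_b)
    return float(np.corrcoef(a, b)[0, 1])

# Synthetic sanity check: two frames sharing one fixed band pattern correlate;
# a frame with its own independent banding does not.
rng = np.random.default_rng(0)
fixed = rng.normal(0, 2, size=(100, 1))               # fixed per-row offsets
f1 = 128 + fixed + rng.normal(0, 1, (100, 150))       # frame 1: fixed bands
f2 = 128 + fixed + rng.normal(0, 1, (100, 150))       # frame 2: same bands
f3 = 128 + rng.normal(0, 2, (100, 1)) + rng.normal(0, 1, (100, 150))  # random
```

Running this on real flat-field frames (instead of the synthetic ones) would answer the stability part of the question directly.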
Banding is caused by a number of factors.
Just as in colour management, any device along the imaging chain can cause you to see banding: a badly calibrated monitor, a badly calibrated printer, or a monitor that cannot display a true 8-bit/12-bit LUT. In my experience, most banding is not actually inherent in the image; it is caused by your monitor being unable to distinguish between discrete levels of gray in a fine gradient.
I don't really see how CMOS vs. CCD would make a difference here. Understanding this requires a bit of technical savvy: the most important factor influencing this kind of banding is the bit depth of your sensor, which alone determines how many discrete tonal values it can record. A 12-bit sensor records 2^12 = 4096 levels of gray, while a 14-bit sensor records 2^14 = 16384 levels. To complicate matters, most digital cameras do not assign a linear weighting to the tonal distribution of their sensors; the encoding tends to be biased toward the highlight end of the range (this is where the phrase "expose to the right" comes from, with regard to histograms).
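The bit-depth point is easy to demonstrate. A small illustrative sketch (my own example, not from any camera's firmware): quantize the same subtle gradient at two bit depths and count the distinct tonal steps that survive.

```python
import numpy as np

def quantize(signal, bits):
    """Quantize a signal in [0, 1] into 2**bits discrete levels."""
    levels = 2 ** bits
    return np.round(signal * (levels - 1)) / (levels - 1)

# A subtle gradient spanning only 5% of the tonal range, like a smooth sky
gradient = np.linspace(0.40, 0.45, 1000)

steps_12 = len(np.unique(quantize(gradient, 12)))  # hundreds of fine steps
steps_6 = len(np.unique(quantize(gradient, 6)))    # a handful -> visible bands
```

The fewer distinct steps available across the gradient, the wider and more visible each band becomes.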
To further complicate matters, most sensor processors do their own interpolation between sparse tonal values. This means, for example, that if one part of the image has a tonal value of 5 and a nearby area has a tonal value of 8, the camera will "guess" what the intermediate tonal values should be. ISO can also be a factor, as it influences dynamic range and exposure latitude.
What you should investigate is whether the banding appears on more than one device. If it does, you can try adding Gaussian noise with a blending mode and a mask to hide it. Here's what I mean:
Now add a noise layer:
(1) Create new Layer
(2) Fill it with 50% Gray: Edit -> Fill -> 50% Gray
(3) Set the Blending Mode of that layer to Hard Light
(4) Add Noise: Filter -> Noise -> Add Noise (Gaussian, 1% or so)
You can then play with the opacity of that layer until the banding disappears; you can also add a mask to it if you like.
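The recipe above works because noise acts as dither. Here's the same idea in miniature (a sketch in NumPy, not what Photoshop actually does internally): a subtle gradient posterizes into hard steps at 8 bits, but adding ~1% Gaussian noise before requantizing spreads those steps into fine grain, which the eye tolerates far better than bands.

```python
import numpy as np

rng = np.random.default_rng(1)

def quantize8(img):
    """Round to 8-bit levels, like saving or displaying an 8-bit image."""
    return np.clip(np.round(img * 255), 0, 255) / 255

# A subtle gradient that posterizes into a few hard steps at 8 bits
gradient = np.tile(np.linspace(0.40, 0.43, 512), (64, 1))
banded = quantize8(gradient)

# ~1% Gaussian noise dithers the hard steps into grain
dithered = quantize8(gradient + rng.normal(0, 0.01, gradient.shape))
```

The dithered version uses many more distinct levels while still averaging out to the same tones, which is exactly what the 50%-gray noise layer achieves visually.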
Horizontal and Vertical Banding Noise (HVBN) is caused by sensor readout, downstream amplification, and the ADC. There can be multiple sources of HVBN: some produce a relatively fixed pattern, others a random one. External signal interference is often a source of softer, more random banding. Exactly what causes banding in which sensor really depends, and no one but the manufacturer has enough information to point to the exact causes for any given camera.
Primarily, HVBN is caused by the way rows of pixels are activated and each column of a row is read, and by the nature of the transistors involved in that readout process. First, transistors manufactured via photolithography are imperfect: imperfections in the base silicon, in the template and etching, etc. can all affect a transistor's response. As such, each pixel in a sensor, as well as circuits for on-die image processing such as CDS (Correlated Double Sampling), will not necessarily behave like all the rest, producing differences. In modern CMOS sensors (Sony Exmor-type sensors excluded), on-die CDS circuitry is often a culprit for introducing banding noise at lower ISO settings (ISO 100 through maybe 800) in the deep shadows.
Some readout designs also include an additional downstream amplifier, used in certain circumstances in addition to the per-pixel amplifiers. Banding noise introduced within the sensor die itself will be exacerbated by any downstream amplifier. These amplifiers usually kick in at very high ISO, such as 6400 and above, which is why relatively "clean" output at ISO 1600 and maybe 3200 suddenly becomes much worse at even higher settings.
Another source of banding is the ADC. There are potentially two culprits here. In the case of a camera like the 7D, which uses split parallel readout (four readout channels are directed to one DIGIC 4 chip and another four to a second DIGIC 4 chip, in an interleaved fashion), a fairly pronounced but even vertical banding can occur, even in the midtones, due to the differing responses of the two DIGIC DSP image processors, which house four ADC units each. Since even bands are sent to one DIGIC's ADC units and odd bands to the other DIGIC's, 100% identical processing is unlikely, and the slight differences manifest as vertical bands.
The final potential source is high-frequency components. High-frequency logic has a tendency to be noisy. Using the 7D again as an example, it has an 18-megapixel sensor that a grand total of eight ADC units must process, fast enough to support an 8 fps frame rate. (Technically, the 7D has even more than 18 million pixels: it is actually a 19.1-megapixel sensor, as Canon always masks off a border of pixels for bias offset and black-point calibration.) At 8 fps, at least 152,800,000 pixels must be processed per second, and since there are eight ADC units, each unit must process 19.1 million pixels every second. That requires a higher clock frequency, which can (via a variety of mechanisms I won't go into here) introduce additional noise.
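The throughput arithmetic above checks out directly:

```python
# Sanity-check the 7D readout figures quoted above
pixels_per_frame = 19_100_000   # 19.1 MP, including the masked border
frames_per_second = 8
adc_units = 8

pixels_per_second = pixels_per_frame * frames_per_second  # total readout rate
per_adc_rate = pixels_per_second // adc_units             # load on each ADC
```

Each ADC really does have to sustain the full 19.1 million conversions per second, since the eight units split an eight-frame-per-second stream evenly.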
There are ways HVBN can be reduced. Some sensor designs clip negative signal values from pixels (in other words, they do not use a bias offset), which has the effect of roughly halving the visible banding, but at the cost of some potentially recoverable detail deep in the shadows. Sensors that do use a bias offset (which preserves negative signal values up to a preset level) tend to have more HVBN at lower ISO, as less clipping is performed in order to support a larger full-well capacity. More advanced ADC designs can reduce noise; some even exploit noise in a form of dithering to nearly eliminate ADC-introduced noise.
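The clipping trade-off can be simulated with a toy black frame (illustrative numbers of my own choosing, in arbitrary ADU): the band pattern is zero-mean, so discarding its negative half roughly halves the peak-to-peak amplitude, while a bias offset preserves both halves.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated black frame: zero-mean per-row band offsets plus read noise
bands = rng.normal(0, 4, size=(200, 1))
frame = bands + rng.normal(0, 1, size=(200, 100))

clipped = np.clip(frame, 0, None)  # no bias offset: negative excursions lost
offset = frame + 32                # bias offset: negative excursions preserved

# Peak-to-peak band amplitude = range of the row means
pp_clipped = np.ptp(clipped.mean(axis=1))
pp_offset = np.ptp(offset.mean(axis=1))
```

The clipped frame shows less banding, but the sub-zero shadow information that clipping threw away is exactly the "potentially recoverable detail" mentioned above.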
Another way banding noise can be reduced is by converting the analog signal to digital earlier, preferably on the sensor die itself. Digital data can be error-corrected during transfer, whereas analog signals tend to pick up noise the farther they travel along electronic buses and through processing units. Increasing the number of ADC units improves parallelism, reducing the speed at which each unit must operate and therefore allowing lower-frequency components to be used. Better manufacturing techniques (usually afforded by a smaller fabrication process, which leaves room for more complex hardware) as well as better silicon wafers can be used to normalize the response curve of each transistor or logic unit, allowing them to produce cleaner results, even at higher frequencies.
Sony Exmor, the well-known, nearly noise-free sensor in Nikon's D800 and D600 cameras, took a fairly radical approach to reducing this most intrusive and frustrating form of noise. Exmor moves the entire image-processing pipeline, up to and including the ADC, onto the sensor die. It hyper-parallelizes the ADC, with one per pixel column (CP-ADC, or column-parallel ADC). It eliminates analog per-pixel amplification and analog CDS in favor of digital amplification and digital CDS. It isolates high-frequency components in a remote area of the sensor die, which nearly eliminates noise introduced by the ADC units themselves. A pixel read results in immediate conversion from an analog charge into a digital value, and it remains digital from that point on. Once digital, all information transfer is effectively noise-free, as data transmission can be error-corrected, guaranteeing proper transmission through buses and downstream image processors.
One of the big wins for Exmor (according to Sony) was the elimination of analog CDS circuitry in favor of digital CDS logic. Sony's claim was that differences in response among analog CDS units were a source of banding noise. Instead of storing the reset charge of each pixel as a charge, a "reset read" is performed; that reset read goes through the same ADC process as a normal image read, except that the digital output is tracked as negative values. When the actual exposure is read out as positive values, the prior "negative" CDS read is applied inline (i.e. each pixel readout starts at some negative value, and counting increases from there). This eliminates noise from non-uniform transistor response and from dark current simultaneously.
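The net effect of that negative-then-positive counting is a per-pixel subtraction. A small sketch of the idea (hypothetical numbers in ADU; this is the arithmetic Sony's scheme implements inline, not their actual logic):

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed per-pixel offsets: non-uniform transistor response plus dark current
offsets = rng.normal(0, 5, size=(4, 6))
scene = np.full((4, 6), 100.0)  # the "true" exposure level

read_noise = 0.5
reset_read = offsets + rng.normal(0, read_noise, (4, 6))          # counted negative
signal_read = scene + offsets + rng.normal(0, read_noise, (4, 6)) # counted positive

# Digital CDS: the exposure count starts from the negated reset read,
# so the fixed offsets cancel pixel by pixel
corrected = signal_read - reset_read
```

After the subtraction the fixed offset pattern is gone and only the (much smaller) read noise of the two samples remains.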
With an Exmor sensor, readout is effectively ISO-less (you may have heard that term elsewhere on the net). All ISO settings are achieved via a simple digital boost (digital amplification) to the appropriate level. For RAW, the ISO setting simply needs to be stored as metadata, and RAW editors boost each pixel value to the appropriate level during demosaicing. This is why an ISO 100 D800 shot can be underexposed, then lifted in post by many stops, without introducing banding noise in the shadows.
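That ISO-less behavior reduces to a single multiply. A minimal sketch (assuming a hypothetical 14-bit RAW white level of 16383; `boost_stops` is my own name, not a real RAW-editor API): pushing an underexposed linear frame by N stops in post is the same operation the camera would have applied as in-camera ISO, and on a readout this clean it adds no banding of its own.

```python
import numpy as np

WHITE_LEVEL = 16383  # hypothetical 14-bit RAW white point

def boost_stops(raw, stops):
    """Digital 'ISO': multiply linear RAW values by 2**stops, clipping at white."""
    return np.clip(raw * 2.0 ** stops, 0, WHITE_LEVEL)

# An ISO 100 frame exposed 4 stops low, then pushed back in post
rng = np.random.default_rng(4)
dark = rng.poisson(50, size=(64, 64)).astype(float)  # shot-noise-limited shadows
lifted = boost_stops(dark, 4)
```

The lift scales the shot noise already in the shadows but, crucially, amplifies no downstream read or banding noise, since on Exmor there is essentially none to amplify.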