Preamble! I find that more and more displays, calibrated with the latest X-Rite or Datacolor colorimeters, show much cleaner gradations, even "low-end" displays like my Dell P2419H, driven by "ordinary" graphics cards and without hardware calibration, which is supposed to be the cream of the crop! Hardware progress is clearly moving in the right direction...
Let's start with the bits in the images: 8 or 16 bits?
As we have seen on the page dedicated to the human eye and colours, an eye with very good visual acuity can distinguish about two hundred shades of each primary RGB colour. Since our vision works on an RGB mixing model, that gives roughly eight million possibilities, hence the famous L*a*b* colour space. To display these colours, a byte-based computer model (8 bits) is therefore enough, because it allows 256 values per primary colour. (With 7 bits we would have had only 128 possibilities.) So we end up with about sixteen million possible combinations when we can see at most eight million colours! To be displayed correctly, an image therefore does not need to be in 16 bits; 8 bits are more than enough. But then what are 16-bit images for? Only for the retouching phase. Each primary colour is then described on 65,536 levels, so profile conversions, levels adjustments and other saturation changes are done with such precision that losses are largely minimized.
Key point! An image does not need to be in 16 bits to be displayed correctly; 8 bits are enough. On the other hand, the risk of damage is greatly reduced on a 16-bit image during the various retouching operations (when levels are tightened significantly several times in a row, for example). The display of a gradient can therefore be perfect even in 8 bits. It is elsewhere that the display quality is determined...
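For the record, here is the arithmetic above as a tiny Python sketch, nothing more than the powers of two already mentioned, with the ~200 shades per primary figure from the page on the human eye:

```python
# Levels per channel for a given bit depth, and the resulting RGB combinations.
for bits in (7, 8, 16):
    levels = 2 ** bits       # values available per primary colour
    combos = levels ** 3     # possible RGB mixes
    print(f"{bits:>2} bits: {levels:>6} levels per channel, {combos:,} RGB combinations")

# For comparison, an eye that distinguishes roughly 200 shades per primary:
print(f"~200 shades per primary -> {200 ** 3:,} distinguishable colours")
```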
8, 10, 14 or 16 bits... for the screen's LUT?
The LUT tables of the screens (tables that convert the received signal - that of the image - into the signal actually sent to the panel) can therefore be in 8, 10, 14 or even 16 bits. But what for? It is all a matter of rounding errors during conversions! Let's look at that now...
Vocabulary for a good understanding - a bit of technique to start with: the RGB signal of your photo is sent to your graphics card and then to your screen in order to be displayed correctly, if possible without loss. However, it can be damaged on its way from one to the other. The transcription of the original signal (that of your digital file) is done in what is called a LUT, short for Look-Up Table, i.e. a conversion table. There is a LUT in your graphics card and another one in your screen. The problem is very simple: when the RGB signal of your file is sent to another device - here your graphics card and then your screen - and since we work digitally, in bits and not continuously (on 256 levels in 8 bits), approximations can occur when the values are passed along during what is called a conversion. The source file has a given RGB value, and that value has to be slightly modified by the destination device (for various reasons that I explain on my conversion page). It is during this conversion calculation that approximation errors, also called rounding errors, can occur, and they materialize as... those famous tonal breaks. Our source file contains a beautiful sky-blue gradient, and yet it shows tonal breaks on the screen...
Key point! Note that we are talking about clean image files here, not overprocessed images: those are necessarily full of real tonal breaks, and there is no reason for them not to appear as such on the screen! What we want is not to add breaks to a file that did not contain any.
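To make the mechanism concrete, here is a minimal, purely illustrative Python sketch (not any manufacturer's real processing): a small gamma-style correction, of the kind a calibration might apply, is stored in a LUT of varying precision, and we count how many of the 256 input levels remain distinct after rounding. The correction value 1/1.1 is an arbitrary assumption chosen for the example.

```python
def lut_levels(lut_bits, gamma=1.0 / 1.1):
    """Count how many of the 256 input levels stay distinct once a
    gamma-style correction has been rounded to the LUT's precision."""
    lut_max = 2 ** lut_bits - 1
    out = {round(((v / 255) ** gamma) * lut_max) for v in range(256)}
    return len(out)

for bits in (8, 10, 14, 16):
    print(f"{bits:>2}-bit LUT keeps {lut_levels(bits)} of 256 input levels distinct")
```

With an 8-bit LUT, a handful of neighbouring input levels collapse onto the same output value (the flat spots you see as banding in a gradient), while from 10 bits upward all 256 levels survive, which is the whole point of high-precision LUTs.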
To avoid this as much as possible, manufacturers install LUTs in their displays that work in 10 bits (or more) instead of the traditional 8 bits, i.e. on 1,024 levels instead of 256, in order to "smooth out" these losses. But since marketing has to have its say, and 10-bit LUTs have existed for more than five years, we now hear about 16-bit LUTs. And why not 32 bits while we're at it! Since the only objective is to make the rounding error tiny, and that objective is already achieved in 10 bits and a fortiori in 14 bits, the 16 bits are a marketing argument.
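As a rough back-of-the-envelope check of that claim, the worst-case rounding error inside a LUT can be compared with one step of the 8-bit signal it receives. The sketch below is a deliberate simplification (it ignores everything else a display does), just to show how quickly that error shrinks with bit depth:

```python
# Worst-case LUT rounding error, expressed as a fraction of one 8-bit step.
for bits in (8, 10, 14, 16):
    lut_step = 1 / (2 ** bits - 1)    # one LUT step, full scale = 1.0
    display_step = 1 / 255            # one step of the 8-bit input signal
    worst_error = (lut_step / 2) / display_step
    print(f"{bits:>2}-bit LUT: worst rounding error ~ {worst_error:.4f} of an 8-bit step")
```

At 10 bits the worst error is already only about an eighth of an 8-bit step, and beyond that the gain is measured in thousandths, which is the sense in which 14 or 16 bits add little that is visible.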
My opinion: it is true that screens with a 10-bit (or more) LUT always display very smooth gradients from "clean" files (with underexposed photos that have been brightened too much, you will get tonal breaks even on these luxury hardware configurations). However, the gradients of a recent iMac are very beautiful too, and yet in 8 bits! As for screen LUTs beyond 10 bits, you now know what I think... A 10-bit LUT is always a good sign because it reflects the manufacturer's commitment to quality, but you can also find very beautiful 8-bit displays. The 10 bits improve things at the margin, so never expect anything spectacular. You often have to get very close to the screen to see the difference, because conversion errors are small when you work on clean files with recent, properly calibrated screens.
And finally, what about the 10-bit display of graphics cards?
That deserves a page of its own: