Briefly explain why we may need to use fewer than 24 bits of color, and why doing so creates a problem.
Generally, what do we need to do to adaptively transform 24-bit color values to 8-bit ones?
A device may not be able to handle such large file sizes, or may not have a 24-bit display. With fewer bits, however, the displayed colors will be somewhat wrong. To adapt, we need to cluster the image's color pixels so as to make the best use of the bits available and represent the colors actually present in the image as accurately as possible.
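One common way to do such adaptive clustering is the median-cut algorithm. A minimal sketch, assuming a list of (r, g, b) tuples as input (the function name `median_cut` and its signature are illustrative, not from the text):

```python
def median_cut(pixels, n_colors):
    """Build an adaptive palette of n_colors entries from a pixel list.

    pixels: list of (r, g, b) tuples; n_colors: a power of two no larger
    than the number of distinct clusters the data can support.
    """
    boxes = [list(pixels)]
    while len(boxes) < n_colors:
        # Split the most populous box along its widest color channel.
        box = max(boxes, key=len)
        boxes.remove(box)
        channel = max(range(3),
                      key=lambda c: max(p[c] for p in box) - min(p[c] for p in box))
        box.sort(key=lambda p: p[channel])
        mid = len(box) // 2
        boxes.append(box[:mid])
        boxes.append(box[mid:])
    # Each palette entry is the mean color of its box.
    return [tuple(sum(p[c] for p in box) // len(box) for c in range(3))
            for box in boxes]

# Example: two tight clusters of pixels yield a two-entry palette.
pixels = [(10, 10, 10)] * 4 + [(250, 20, 20)] * 4
palette = median_cut(pixels, 2)
```

Each original pixel is then replaced by an index (8 bits for a 256-entry palette) into this palette, so colors that dominate the image get representatives close to their true values.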
Exercise 2
Suppose we decide to quantize an 8-bit grayscale image down to just 2 bits of accuracy. What is the simplest way to do so? What ranges of byte values in the original image are mapped to what quantized values?
Just keep the 2 most significant bits of each grayscale value.
I.e., any values in the ranges
0000 0000 to 0011 1111
0100 0000 to 0111 1111
1000 0000 to 1011 1111
1100 0000 to 1111 1111
are mapped into 4 representative grayscale values.
In decimal, these ranges are:
0 to (2^6 - 1)
2^6 to (2^7 - 1)
2^7 to 2^7 + (2^6 - 1)
2^7 + 2^6 to (2^8 - 1)
i.e.,
0 to 63
64 to 127
128 to 191
192 to 255
Then reconstruction values should be taken as the middle
of these ranges; i.e.,
32
96
160
224
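This mapping can be written as a one-line operation on each byte: shift right by 6 to get the 2-bit index, then reconstruct at the midpoint of the corresponding range. A minimal sketch (the function name is illustrative):

```python
def quantize_2bit(v):
    """Map an 8-bit grayscale value (0-255) to its 2-bit reconstruction level."""
    idx = v >> 6          # keep the 2 most significant bits: index 0..3
    return idx * 64 + 32  # midpoint of each 64-wide range: 32, 96, 160, 224

# Example: values in 0..63 map to 32, 64..127 to 96, and so on.
print([quantize_2bit(v) for v in (0, 63, 64, 127, 128, 191, 192, 255)])
```

The shift discards the low 6 bits, and adding 32 centers the reconstruction in its range, which halves the worst-case quantization error compared with mapping each range to its lower endpoint.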
Exercise 3
Suppose we have available 24 bits per pixel for a color image. However, we notice that humans are more sensitive to R and G than to B — in fact, 1.5 times more sensitive to R or G than to B.