In general
==========

There are planar and packed modes.
- Planar mode means: you have 3 separate images (planes), one for each
  component, each plane with 8 bits/sample. To get the real colored pixel,
  you have to combine the components from all planes. The resolution of the
  planes may differ!
- Packed mode means: you have all components mixed/interleaved together,
  so you have small "packs" of components in a single, big image.
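
As a rough illustration (a hypothetical sketch, not MPlayer API - the struct
and helper names are made up), reading one pixel in each mode could look
like this:

    #include <stdint.h>

    /* Hypothetical planar YUV 4:2:0 image: three separate planes, chroma at
     * half resolution in both directions (just a sketch). */
    struct planar_img {
        uint8_t *y, *u, *v;       /* one plane per component      */
        int y_stride, uv_stride;  /* bytes per line in each plane */
    };

    /* Planar: read the three components of the pixel at (x, y) by combining
     * the (possibly subsampled) planes. */
    static void planar_get_pixel(const struct planar_img *img, int x, int y,
                                 uint8_t *Y, uint8_t *U, uint8_t *V)
    {
        *Y = img->y[y * img->y_stride + x];
        *U = img->u[(y / 2) * img->uv_stride + x / 2];
        *V = img->v[(y / 2) * img->uv_stride + x / 2];
    }

    /* Packed RGB24: all components interleaved in one big image. */
    static void packed_get_pixel(const uint8_t *img, int stride, int x, int y,
                                 uint8_t *R, uint8_t *G, uint8_t *B)
    {
        const uint8_t *p = img + y * stride + x * 3;  /* one 3-byte "pack" */
        *R = p[0];
        *G = p[1];
        *B = p[2];
    }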

There are RGB and YUV colorspaces.
- RGB: Red, Green and Blue components. Used by analog VGA monitors.
- YUV: Luminance (Y) and Chrominance (U,V) components. Used by some
  video systems, like PAL. Also most M(J)PEG/DCT based codecs use this.

With YUV, the resolution of the U,V planes is usually reduced.
The most common YUV formats:
FOURCC:    bpp: subsampling: plane sizes: (w=width, h=height of original image)
444P       24   YUV 4:4:4    Y: w * h  U,V: w * h
YUY2,UYVY  16   YUV 4:2:2    Y: w * h  U,V: (w/2) * h      [MJPEG]
YV12,I420  12   YUV 4:2:0    Y: w * h  U,V: (w/2) * (h/2)  [MPEG, H.263]
411P       12   YUV 4:1:1    Y: w * h  U,V: (w/4) * h      [DV-NTSC, CYUV]
YVU9,IF09   9   YUV 4:1:0    Y: w * h  U,V: (w/4) * (h/4)  [Sorenson, Indeo]

The YUV a:b:c naming style means: for <a> samples of Y there are <b> samples
of UV in odd lines and <c> samples of UV in even lines.
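
For example, the plane sizes from the table translate into buffer sizes
roughly like this (a hypothetical helper, assuming tightly packed planes
without any padding):

    #include <stddef.h>

    /* Buffer size of a planar YUV 4:2:0 (YV12/I420) image of w x h pixels,
     * assuming tightly packed planes without padding. */
    static size_t yuv420_image_size(int w, int h)
    {
        size_t y_size  = (size_t)w * h;              /* Y:   w * h         */
        size_t uv_size = (size_t)(w / 2) * (h / 2);  /* U,V: (w/2) * (h/2) */
        return y_size + 2 * uv_size;                 /* 12 bits per pixel  */
    }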

Conversion (partly cut'n'pasted from the web and the mailing list):

RGB to YUV Conversion:
    Y  =      (0.257 * R) + (0.504 * G) + (0.098 * B) + 16
    Cr = V =  (0.439 * R) - (0.368 * G) - (0.071 * B) + 128
    Cb = U = -(0.148 * R) - (0.291 * G) + (0.439 * B) + 128
YUV to RGB Conversion:
    B = 1.164(Y - 16)                  + 2.018(U - 128)
    G = 1.164(Y - 16) - 0.813(V - 128) - 0.391(U - 128)
    R = 1.164(Y - 16) + 1.596(V - 128)

In both these cases, you have to clamp the output values to keep them in
the [0-255] range. Note that the valid range is actually a subset of
[0-255]: with these formulas Y is limited to [16-235] and U,V to [16-240],
while R,G,B use the full [0-255] range. Simply clamping the values into
[0-255] produces acceptable results.
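
A minimal sketch of these formulas in C (the helper names are made up;
only the coefficients come from the formulas above):

    #include <stdint.h>

    /* Clamp a value into the [0-255] range. */
    static uint8_t clamp255(double v)
    {
        return v < 0 ? 0 : v > 255 ? 255 : (uint8_t)(v + 0.5);
    }

    static void rgb_to_yuv(uint8_t R, uint8_t G, uint8_t B,
                           uint8_t *Y, uint8_t *U, uint8_t *V)
    {
        *Y = clamp255( 0.257 * R + 0.504 * G + 0.098 * B +  16);
        *V = clamp255( 0.439 * R - 0.368 * G - 0.071 * B + 128);  /* Cr */
        *U = clamp255(-0.148 * R - 0.291 * G + 0.439 * B + 128);  /* Cb */
    }

    static void yuv_to_rgb(uint8_t Y, uint8_t U, uint8_t V,
                           uint8_t *R, uint8_t *G, uint8_t *B)
    {
        double y = 1.164 * (Y - 16);
        *R = clamp255(y + 1.596 * (V - 128));
        *G = clamp255(y - 0.813 * (V - 128) - 0.391 * (U - 128));
        *B = clamp255(y + 2.018 * (U - 128));
    }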

Julien (sorry, I can't recall his surname) suggests that there are
problems with the above formulas and proposes the following instead:
    Y  = 0.299R + 0.587G + 0.114B
    Cb = U'= (B-Y)*0.565
    Cr = V'= (R-Y)*0.713
with reciprocal versions:
    R = Y + 1.403V'
    G = Y - 0.344U' - 0.714V'
    B = Y + 1.770U'
Note: these formulas don't contain the +128 offsets of the U,V values!
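
As a sketch (again with a made-up helper name):

    #include <stdint.h>

    /* Full-range variant: no +128 offset, so U' and V' come out as signed
     * values centered around 0 (same coefficients as above). */
    static void rgb_to_yuv_fullrange(uint8_t R, uint8_t G, uint8_t B,
                                     double *Y, double *U, double *V)
    {
        *Y = 0.299 * R + 0.587 * G + 0.114 * B;
        *U = (B - *Y) * 0.565;  /* Cb = U' */
        *V = (R - *Y) * 0.713;  /* Cr = V' */
    }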

Conclusion:
Y = luminance, the weighted average of R G B components. (0=black 255=white)
U = Cb = blue component (0=green 128=grey 255=blue)
V = Cr = red component  (0=green 128=grey 255=red)


Huh. The planar YUV modes.
==========================

The most misunderstood thingie...

In MPlayer, we usually have 3 pointers to the Y, U and V planes, so it
doesn't matter what the order of the planes in the memory is:
    for mp_image_t and libvo's draw_slice():
	planes[0] = Y = luminance
	planes[1] = U = Cb = blue
	planes[2] = V = Cr = red
    Note: planes[1] is ALWAYS U and planes[2] is V; the FOURCC
    (YV12 vs. I420) doesn't matter here! So every codec that uses all
    3 pointers (not only the first one) normally supports both YV12 and
    I420 (=IYUV).

But some codecs (VfW, DShow) and vo drivers (xv) ignore the 2nd and 3rd
pointers and use only a single pointer to the planar YUV image. In this
case we must know the exact order and alignment of the planes in memory
(see the sketch after the FOURCC list below)!

from the webartz fourcc list:
YV12:  12 bpp, full sized Y plane followed by 2x2 subsampled V and U planes
I420:  12 bpp, full sized Y plane followed by 2x2 subsampled U and V planes
IYUV:  the same as I420
YVU9:   9 bpp, full sized Y plane followed by 4x4 subsampled V and U planes
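
So when only a single pointer is available, the plane pointers have to be
derived from the FOURCC. A minimal sketch (made-up function name, assuming
tightly packed planes without padding):

    #include <stdint.h>

    /* Derive the 3 plane pointers from a single planar YUV 4:2:0 buffer,
     * following the layouts above (sketch: assumes tightly packed planes).
     * Note that planes[1] is always U and planes[2] is always V, regardless
     * of the FOURCC. */
    static void split_yuv420(uint8_t *buf, int w, int h, int is_yv12,
                             uint8_t *planes[3])
    {
        uint8_t *chroma1 = buf + w * h;                  /* first chroma plane  */
        uint8_t *chroma2 = chroma1 + (w / 2) * (h / 2);  /* second chroma plane */

        planes[0] = buf;          /* full sized Y plane comes first */
        if (is_yv12) {            /* YV12: Y, then V, then U        */
            planes[2] = chroma1;
            planes[1] = chroma2;
        } else {                  /* I420/IYUV: Y, then U, then V   */
            planes[1] = chroma1;
            planes[2] = chroma2;
        }
    }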

Huh 2. RGB vs. BGR ?
====================

The 2nd most misunderstood thingie...

You know, there are Intel and Motorola CPUs, and they use different byte
orders (little-endian vs. big-endian). There are also others, like MIPS or
Alpha, but they all follow either the Intel or the Motorola byte order.
Unfortunately, the packed colorspaces depend on the CPU byte order, so RGB
on Intel and on Motorola means a different order of bytes.

In MPlayer, we have the constants IMGFMT_RGBxx and IMGFMT_BGRxx.
Unfortunately, some codecs and vo drivers follow the Intel, some the Motorola
byte order, so they are incompatible. We had to find a stable base, so a long
time ago I chose OpenGL, as it's a widespread standard and it defines well
what RGB is and what BGR is. So MPlayer's RGB is compatible with OpenGL's
GL_RGB on all platforms, and the same goes for BGR - GL_BGR.
Unfortunately, most of the x86 codecs call our BGR formats RGB, which
sometimes confuses developers.

memory order:           name
lowest address .. highest address
RGBA                    IMGFMT_RGBA
ARGB                    IMGFMT_ARGB
BGRA                    IMGFMT_BGRA
ABGR                    IMGFMT_ABGR
RGB                     IMGFMT_RGB24
BGR                     IMGFMT_BGR24

order in an int         name
most significant .. least significant bit
8A8R8G8B                IMGFMT_BGR32
8A8B8G8R                IMGFMT_RGB32
5R6G5B                  IMGFMT_BGR16
5B6G5R                  IMGFMT_RGB16
1A5R5G5B                IMGFMT_BGR15
1A5B5G5R                IMGFMT_RGB15

The following are palettized formats; the palette is in the second plane.
When they come out of the swscaler (this can be different when they
come from a codec!), their palette is organized in such a way
that you actually get:

3R3G2B                  IMGFMT_BGR8
2B3G3R                  IMGFMT_RGB8
1R2G1B                  IMGFMT_BGR4_CHAR
1B2G1R                  IMGFMT_RGB4_CHAR
1R2G1B1R2G1B            IMGFMT_BGR4
1B2G1R1B2G1R            IMGFMT_RGB4

Depending on whether the CPU is little- or big-endian, different 'in memory'
and 'in register' formats will be equal (LE: BGRA == BGR32, BE: ARGB == BGR32).
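
For example, storing one BGR32 pixel through its 'in register' definition
and looking at the resulting bytes illustrates this (a hypothetical snippet,
not MPlayer code):

    #include <stdint.h>
    #include <string.h>

    /* Store one BGR32 pixel according to its 'in register' layout (8A8R8G8B)
     * and look at the resulting bytes in memory: on a little-endian CPU they
     * are B,G,R,A (== BGRA), on a big-endian CPU they are A,R,G,B (== ARGB). */
    static void bgr32_memory_order(uint8_t r, uint8_t g, uint8_t b,
                                   uint8_t bytes[4])
    {
        uint32_t pixel = (0xFFu << 24) | ((uint32_t)r << 16)
                       | ((uint32_t)g << 8) | b;
        memcpy(bytes, &pixel, 4);  /* byte order now depends on the CPU */
    }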

Practical coding guide:

The 4, 8, 15, and 16 bit formats are defined so that the portable way to
access them is to load the pixel into an integer and use bitmasks.

The 24 bit formats are defined so that the portable way to access them is
to address the 3 components as separate bytes, as in ((uint8_t *)pixel)[0],
((uint8_t *)pixel)[1], ((uint8_t *)pixel)[2].

When a 32-bit format is identified by the four characters A, R, G, and B in
some order, the portable way to access it is by addressing the 4 components
as separate bytes.

When a 32-bit format is identified by the 3 characters R, G, and B in some
order followed by the number 32, the portable way to access it is to load
the pixel into an integer and use bitmasks.

When the above portable access methods are not used, you will need to write
2 versions of your code, and use #ifdef WORDS_BIGENDIAN to choose the correct
one.
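
Putting this together, a sketch of the two portable access patterns
(made-up helper names, not MPlayer functions):

    #include <stdint.h>
    #include <string.h>

    /* 16 bpp, 5R6G5B (IMGFMT_BGR16): load the pixel into an integer and use
     * bitmasks. The native integer load is what makes this portable. */
    static void read_bgr16(const uint8_t *pixel,
                           uint8_t *r, uint8_t *g, uint8_t *b)
    {
        uint16_t v;
        memcpy(&v, pixel, 2);      /* native integer load of the pixel */
        *r = (v >> 11) & 0x1F;
        *g = (v >>  5) & 0x3F;
        *b =  v        & 0x1F;
    }

    /* 24 bpp, memory order R,G,B (IMGFMT_RGB24): address the 3 components
     * as separate bytes. */
    static void read_rgb24(const uint8_t *pixel,
                           uint8_t *r, uint8_t *g, uint8_t *b)
    {
        *r = pixel[0];
        *g = pixel[1];
        *b = pixel[2];
    }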