Basic Image Processing 2
Digital Media Computing
COSC2271 (UG), COSC2272 (PG)
Dr. Ron van Schyndel
© 2003-2011 Ron van Schyndel, RMIT University
Lecture Overview
 …from last lecture
 Image Processing Definitions
 Image Processing in AWT
 The MVC pattern (Image producer / consumer / observer / filter)
 Point Operations – Single Pixel Filters
 Image Histograms
 Transfer Curves
 Geometrical Operations
 This lecture
 Neighbourhood Operations - Multiple Pixel Filters
 Convolutions
 Median and other statistical 'order' filters
 Basic Colour Theory
 Colour Models: RGB, CMYK, HSB, Lab/Luv, YCbCr/YUV/YIQ, 
Multiple Pixel Filters
• These filters consider a pixel's neighbours when creating its new value. Essentially it is an operation on 2 images to create a third.
• Another name given to this filter type is a Box or Spatial Filter, and the process is sometimes termed Convolution.
• The most common method uses a 3x3 pixel pattern.
• Larger patterns like 5x5 and 7x7 are also used.
• The new centre pixel value is calculated by a formula which includes the old values of the pixel and its neighbours.
• The standard method is to multiply each coefficient by the corresponding old pixel value, sum the results, and save the sum as the pixel in the new image.
1D Neighbourhood Operations
The Running Average is a neighbourhood operation in 1D. Example masks:
  1 1 1   (Σ, ÷ 3)   non-weighted average mask
  1 2 1   (÷ 4)      centre-weighted mask
  -1 0 +1            end-weighted difference mask
  0 1 0              unit mask (value is unchanged)

1D example: Running Average
Original data stream:            10  20  30  20  20  20  10  50  10  20
Averaged (1 1 1, ÷ 3):               20  23  23  20  16  26  23  26
Centre-weighted (1 2 1, ÷ 4):        20  25  22  20  17  22  30  22
Difference (-1 0 +1):                20   0 -10   0 -10  30   0 -30
Note: in the averaged stream the peak (50) has almost completely disappeared; the difference mask gives zero wherever there is no gradient.
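The 1D masks above can be sketched as a small Java helper (the class and method names here are mine, not from the lecture code):

```java
// Sketch: apply a 3-element 1D mask to a data stream, as on this slide.
// The output has 2 fewer entries because the ends have no full neighbourhood.
public class RunningAverage1D {
    public static int[] apply(int[] data, int[] mask, int divisor) {
        int[] out = new int[data.length - 2];
        for (int x = 1; x < data.length - 1; x++) {
            int sum = 0;
            for (int i = -1; i <= 1; i++)
                sum += data[x + i] * mask[i + 1];
            out[x - 1] = sum / divisor;   // integer divide, matching the slide's values
        }
        return out;
    }

    public static void main(String[] args) {
        int[] stream = {10, 20, 30, 20, 20, 20, 10, 50, 10, 20};
        int[] avg = apply(stream, new int[]{1, 1, 1}, 3);  // non-weighted average
        int[] cw  = apply(stream, new int[]{1, 2, 1}, 4);  // centre-weighted
        System.out.println(java.util.Arrays.toString(avg)); // 20 23 23 20 16 26 23 26
    }
}
```

Running it on the slide's data stream reproduces the averaged rows shown above.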
2D Neighbourhood Operations
Like the 1D running average, each entry in the 2D original image is multiplied by the corresponding entry in the 2D mask and the results are summed, as shown below. This process is called convolution.

$$p_{new}(x, y) = \sum_{i,j=-n}^{n} p_{old}(x-i,\ y-j)\, mask(i, j)$$
3 x 3 Filtering Process
(Figure: convolution of an image of a "T" with a 5-pixel cross mask. The lower diagram shades each region by how many dark pixels of the T are covered by the cross: 0, 1, 2, 3, 4 or 5 pixels - e.g. 5 pixels covered at the junction of the T, 2 pixels covered near its edge. Note how the destination has 2 fewer rows and columns than the source.)
3 x 3 Convolution Algorithm

double sum;
int maskwidth, maskheight, i, j, x, y;
int image[][], mask[][];
double factor;              // normalising divisor (e.g. sum of mask weights)
double convolOutput[][];

int mw2 = maskwidth/2;   int mh2 = maskheight/2;   // integer divide
for (x = mw2; x < width-mw2; x++) {
    for (y = mh2; y < height-mh2; y++) {
        sum = 0.0;
        for (i = -mw2; i <= mw2; i++) {
            for (j = -mh2; j <= mh2; j++) {
                sum += (double) image[x+i][y+j] * mask[mw2-i][mh2-j];
            }
        }
        convolOutput[x-mw2][y-mh2] = sum / factor;
    }
}

$$p_{new}(x, y) = \sum_{i,j=-n}^{n} p_{old}(x-i,\ y-j)\, mask(i, j)$$
Basic Image Processing 2
Digital Media Computing 8© 2003-2011 Ron van Schyndel, RMIT University
2D Neighbourhood Filters
• To modify an image according to its neighbours while retaining the same overall level of image information, the total of all mask values should equal 1.
• Coefficients can be altered. The values shown are just typical values; experimentation is needed to obtain the 'best' values for different images.
(Test image: Lena, 128x128)
2D Neighbourhood Filters
Unity: used for testing code; Output = Input.
  0 0 0
  0 1 0
  0 0 0
Replication: a copy of the image occurs at every position containing a 1.
  1 0 0
  0 1 0
  0 0 0
Roberts Edge: a +ve and a -ve image occur at the positions containing the 1 and the -1.
  -1 0 0
   0 1 0
   0 0 0
2D Neighbourhood Filters
Sharpening:
   0 -1  0
  -1  5 -1
   0 -1  0
• The centre value is positive and the surrounding neighbours are negative. This highlights differences between a pixel and its neighbours.
(Image caption: too much sharpening!)
2D Neighbourhood Filters
Blurring: merge each pixel with its neighbours.
  1/9 1/9 1/9
  1/9 1/9 1/9
  1/9 1/9 1/9
• The amounts for each pixel are not fixed; smaller and larger effects are possible by varying the values, e.g. for less blurring:
  0    1/8  0
  1/8  1/2  1/8
  0    1/8  0
• The same applies to sharpening.
Unsharp Masking
A method of making an image look sharper without making the sharpening look obvious. The basic idea is:
  Original      =  Blurred Image  +  Image Edges
Rearranging the equation:
  Image Edges   =  Original  -  Blurred Image
  -10 10 10 -10 =  60 90 90 60  -  70 80 80 70
  Sharp Image   =  Original  +  Image Edges
  50 100 100 50 =  60 90 90 60  +  (-10 10 10 -10)
So we get the edges by subtracting a blurred copy from the original, then add those edges back to the original, which then has double-strength edges. Note that NO NEW information has been added.
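The arithmetic above can be checked with a short sketch (the helper name is mine; the clamp to 0..255 is an assumption for real pixel data):

```java
// Sketch of unsharp masking on a 1D row of pixels:
// edges = original - blurred; sharp = original + edges, clamped to pixel range.
public class Unsharp {
    public static int[] sharpen(int[] orig, int[] blurred) {
        int[] sharp = new int[orig.length];
        for (int i = 0; i < orig.length; i++) {
            int edge = orig[i] - blurred[i];                    // Image Edges
            sharp[i] = Math.max(0, Math.min(255, orig[i] + edge));
        }
        return sharp;
    }
}
```

Applied to the row 60 90 90 60 with blurred row 70 80 80 70 it returns 50 100 100 50, matching the slide.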
Edge Detection (Image Analysis)
These are filters that extract edge information from an image. The total of the weights equals zero.
  -1 -1 -1
  -1  8 -1
  -1 -1 -1
Mathematically this is known as a Laplacian Filter.
Consider a group of pixels with equal values: the output will equal zero. This is correct, as there are no edges. If the pixels have differing values, then there will be a non-zero output for the centre pixel.
Edge Detection
• The output from the algorithm is zero for regions of flat colour and non-zero where differences exist between adjacent pixels.
• The non-zero values are often quite small.
• The more obvious edges in an image have bigger colour differences and result in larger output values.
• A post-processing step may be performed where all values below a threshold are reduced to zero and all values above the threshold are raised to the maximum, 255.
• This will extract the more major edges. The threshold is often altered interactively until the 'best' value for the image being analysed is found.
• A simple method to draw edges over an image is to build a new image with full transparency for all zero pixel values and no transparency for edges. The edge colour depends on the image, but white, black, red and orange are all potential highlight colours.
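The thresholding step described above can be sketched as (names are mine):

```java
// Sketch: values below the threshold go to 0, the rest are raised to 255.
public class EdgeThreshold {
    public static int[] threshold(int[] edges, int t) {
        int[] out = new int[edges.length];
        for (int i = 0; i < edges.length; i++)
            out[i] = (edges[i] < t) ? 0 : 255;
        return out;
    }
}
```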
Edge Detection (Directional)
• Edge detection can be directional, and a wide variety of edge filters are used in practice. The following mask will find vertical edges that are two pixels wide but ignore horizontal edges.

East:
  -1 0 +1
  -2 0 +2
  -1 0 +1

North-East:
   0 +1 +2
  -1  0 +1
  -2 -1  0

Together with the SE, S, SW, W, NW and N versions, these 8 are known as Sobel Directional Edge Enhancements.
• Filters can be created for finding horizontal and diagonal edges. Results can be combined with other edges and other filters for a variety of effects.
• One extensive use of this filter type is car number plate detection from photographs.
Edge Detection - Examples
Use H/V Sobel Edges and replace with max of the 2.
Original ‘Lena’
Horiz Signed
Vert Unsigned
Max(H,V) Unsigned
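The max-of-two combination used in this example can be sketched as follows (class and method names are mine; the horizontal mask is the transpose of the vertical Sobel mask on the previous slide):

```java
// Sketch: respond with the larger of the horizontal and vertical Sobel
// magnitudes at one pixel, clamped to 255 ("Max(H,V) Unsigned").
public class SobelMax {
    static final int[][] SV = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}}; // vertical edges
    static final int[][] SH = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}}; // horizontal edges

    static int respond(int[][] img, int x, int y, int[][] m) {
        int s = 0;
        for (int i = -1; i <= 1; i++)
            for (int j = -1; j <= 1; j++)
                s += img[x + i][y + j] * m[i + 1][j + 1];
        return s;
    }

    public static int maxEdge(int[][] img, int x, int y) {
        int h = Math.abs(respond(img, x, y, SH));   // unsigned responses
        int v = Math.abs(respond(img, x, y, SV));
        return Math.min(255, Math.max(h, v));
    }
}
```

A flat region gives 0; a strong vertical step saturates to 255.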
Combining Edge Detection and Pixel Remapping

import java.awt.*;
import java.awt.image.*;
import java.awt.geom.*;
import javax.imageio.*;
import java.io.*;

public class CropImage4a extends Frame
{
    private BufferedImage im, crop, edgeCrop, lutEdgeCrop;
    private int w = 100, h = 100;      // WARNING: hardcoded image dims
    float[] edgeKernel = { 0.0f, -1.0f,  0.0f,
                          -1.0f,  4.0f, -1.0f,
                           0.0f, -1.0f,  0.0f };

    // this constructor creates a cropped image and an edge image
    public CropImage4a(String[] str)
    {   try {      // file operations can generate IOExceptions
            int[] pixels = new int[w*h];
            im = ImageIO.read(new File(str[0]));
            im.getRGB(110, 5, w, h, pixels, 0, w);
            // At this point, we can program the edge detect manually on the
            // pixels array, or use Java 2D.
            crop = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            crop.setRGB(0, 0, w, h, pixels, 0, w);
            Kernel kernel = new Kernel(3, 3, edgeKernel);
            ConvolveOp cop = new ConvolveOp(kernel,
                                 ConvolveOp.EDGE_ZERO_FILL, null);
            edgeCrop = cop.filter(crop, null);   // do it
            short[] x4 = new short[256];
            for (int i = 0; i < 256; i++)
                x4[i] = (short)((i > 63) ? 255 : 4*i);   // LUT[i] = 4*i, clamped at 255
            LookupOp lop = new LookupOp(new ShortLookupTable(0, x4), null);
            lutEdgeCrop = lop.filter(edgeCrop, null);    // do it
        } catch (IOException e) { e.printStackTrace(); }
    }

    public void paint(Graphics g) {
        g.drawImage(im, 10, 10, this);
        g.drawImage(edgeCrop, im.getWidth(this)+10, 10, this);
        g.drawImage(lutEdgeCrop,
                    im.getWidth(this)+crop.getWidth(this)+20, 10, this);
    }
    // provide a parameter which is the image filename
    public static void main(String[] str)
    {   if (str.length == 0) System.exit(0);
        Frame f = new CropImage4a(str);
        f.setSize(1200, 900);
        f.setVisible(true);
    }
}

• This program uses a ConvolveOp and a supplied Kernel to perform an edge detection. It then uses a LookupOp whose lookup table holds LUT[i] = 4*i, clamped at 255; this table is used to remap each pixel. The remapping increases the intensity fourfold, allowing dim edges to be easily seen.
Median Filter
• Instead of using a mathematical combination, this filter selects the MEDIAN value of a group of pixels. It is a statistical estimator.
• This is very useful for removing noise from an image. Noise could result from dust on a scan or, in a sound file, from poor quality recording equipment. The median is relatively insensitive to outlier pixels, since outliers will not change the middle of the sort sequence.
• Example - given the pixel values shown (the sample is a uniform colour with some noise at both extremes):
  10  10  50
  30   0  50
  50 255 255
Sorted: 0, 10, 10, 30, 50, 50, 50, 255, 255
The 5th (median) value is 50, thus the centre pixel in the new image is 50.
Median Filter (removing the sort)
  10  10  50
  30   0  50
  50 255 255
Sorted: 0, 10, 10, 30, 50, 50, 50, 255, 255 - the 5th value is 50, thus the centre pixel in the new image is 50.
• Sorting is a very expensive activity. If the number of values to be sorted is small and their domain is small, we can do a histogram sort instead.
• Create a 256-element array:
  hist = 0, 0, 0, … , 0;   // initialise 256 elements to 0
• For every pixel value p, increment the pth element of the histogram. For the sample above this gives:
  hist[0] = 1;  hist[10] = 2;
  hist[30] = 1; hist[50] = 3; hist[255] = 2;
• When finished, scan from 0 until the count reaches half of all pixels counted; the index reached is then the median.
  for (cum = 0, i = 0; cum < 5; i++)
      cum += hist[i];
  // the loop leaves i one past the median, so the median is i-1
Median Filtering (code fragment)

public CropImage5(String[] str)
{   try     // file operations can generate IOException
    {   int cum, i;
        int[] pixels = new int[9];
        int[] hist = new int[256];
        im = ImageIO.read(new File(str[0]));
        crop = new BufferedImage(im.getWidth(this),
                   im.getHeight(this), BufferedImage.TYPE_INT_RGB);
        for (int x = 1; x < im.getWidth(this)-1; x++) {
            for (int y = 1; y < im.getHeight(this)-1; y++) {
                im.getRGB(x-1, y-1, 3, 3, pixels, 0, 3);
                // histogram-sort the 9 pixels to find the middle (5th) value
                for (i = 0; i < 256; i++)
                    hist[i] = 0;
                for (i = 0; i < 9; i++)
                    hist[(pixels[i]>>8) & 0xff]++;     // Green only
                for (cum = 0, i = 0; cum < 5; i++)
                    cum += hist[i];
                i--;                                   // loop leaves i one past the median
                crop.setRGB(x, y, ((i<<16)|(i<<8)|i)); // R=G=B
            }
        }
    } catch (IOException e) { e.printStackTrace(); }
}
Order Filter
• The median filter can be generalised by choosing another value instead of the 5th.
• These are generally called Order Filters.
• The Min, Median and Max values are special cases.
• Min (also called the Oil-painting filter): generally darkens and blurs.
  • Also useful for emphasising tiny details that might otherwise get lost.
  • It 'fattens' thin lines.
• Max (also called the Watercolour filter): generally lightens and blurs.
  • Also useful for emphasising overall colours by removing tiny details.
• Note that if the image is reversed (light on a dark background), the effects above also reverse.
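A generic order filter over one 3x3 neighbourhood can be sketched as (names are mine):

```java
import java.util.Arrays;

// Sketch: pick the value of a given rank from a sorted 3x3 neighbourhood.
// rank 0 = Min, rank 4 = Median, rank 8 = Max.
public class OrderFilter {
    public static int select(int[] neighbourhood, int rank) {
        int[] sorted = neighbourhood.clone();  // don't disturb the caller's data
        Arrays.sort(sorted);
        return sorted[rank];
    }
}
```

On the median-filter sample from the earlier slide, rank 4 returns 50, rank 0 returns 0 and rank 8 returns 255.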
Image Transformations
• Images can be rotated, scaled and transformed with the use of homogeneous transformations using matrix multiplication.
• For 2D (x and y), a 3 x 3 matrix is used; for 3D (x, y and z), a 4 x 4 matrix applies.
• These matrices can also be used for altering perspective, shearing an image, and other advanced transformations like fish-eye lens projections, warping and morphing, covered later in this course.
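As a small taste of this (names are mine), a 2D point is extended to (x, y, 1) and multiplied by the 3 x 3 matrix:

```java
// Sketch: apply a 3x3 homogeneous transform to a 2D point (x, y, 1).
public class Homogeneous2D {
    public static double[] apply(double[][] m, double x, double y) {
        double xh = m[0][0]*x + m[0][1]*y + m[0][2];
        double yh = m[1][0]*x + m[1][1]*y + m[1][2];
        double w  = m[2][0]*x + m[2][1]*y + m[2][2];
        return new double[]{xh / w, yh / w};   // divide out the homogeneous w
    }
}
```

With a translation matrix {{1,0,5},{0,1,7},{0,0,1}}, the point (2, 3) maps to (7, 10).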
Colour
CIE 1931 Chromaticity Diagram
CIE is short for Commission Internationale de l'Eclairage - the International Commission on Lighting (or Illumination). The lighting standard at left was defined in 1931 and is one of the reference standards used for scientific and engineering purposes.
We can now easily describe colour using linear interpolation between three points: Red, Green, Blue.
(Figure annotations: the spectrum is twisted into a curve so that we can model magenta/purple; the shaded triangle is the range of colours possible by linear interpolation of spectral red, green and blue.)
CIE 1931 Chromaticity Diagram
Range of visible colours.
• RGB monitor colours lie within the white triangle above.
• Printer colours lie within the black 'triangle'.
• Illuminant C is an approximation of the colour of daylight.
• Grassmann's law: colours are complementary if their sum is close to the illuminant colour (so C1 is nearly complementary to C2).
• Notice that both C1 and C2 are out-of-gamut. To make them in-gamut, you interpolate towards the illuminant C.
(Figure note: if the monitor phosphors were completely monochromatic, these circles would lie on the spectral curve.)
CIE 1976 Chromaticity Diagram
Comparing colours. (The 1976 diagram uses slightly different scale axes, u' and v'.)
• Colour descriptions are somewhat approximate since their perception is strongly affected by the prevailing light colour.
• Comparing colours in two scenes can be achieved by:
  • comparing the colour of a sheet of white paper in each scene;
  • adjusting the colours in each scene along the line shown until the sheets are the same colour;
  • assuming that lighting is uniform across the scene or changes identically in each scene.
The first pictures of Mars' surface showed a blue-ish sky until a colour chart mounted on the side of the lander was used to calibrate the colours - yielding the orange-pink sky we are now familiar with.
(Colour temperature figure: Warm 2000-3000K, Mid-range 3000-4000K, Cool 4000K+.
http://www.goodmart.com/facts/light_bulbs/color_temperature.aspx
http://hyperphysics.phy-astr.gsu.edu/hbase/vision/imgvis/Ciebbody.jpg)
Colour Model - RGB
• If we shine a red light, our red-sensitive cones will pick it up. If we also shine a green light, our green-sensitive cones will react. So now both red and green cones are reacting. We cannot tell the difference between this and a coloured light with mostly red and green in it.
• This is why monitors using RGB can create images indistinguishable from the real image.
• We can model this with image pixels containing three components - R, G and B.
  • This RGB model is called additive colour synthesis and is used when the default background is black, and 100% of each colour equals white.
  • Black is an RGB of 0, 0, 0; white is 255, 255, 255.
  • Generally 8 bits are used for each colour component. This allows 256 shades of each colour, with values ranging from 0 to 255.
• The total number of colours is 256*256*256, which equals 16,777,216 colour shades in total. This is normally called 16 million colours.
  • While our eyes cannot distinguish all 16 million shades, humans can actually see a greater range of colours than can be shown on an RGB monitor.
The RGB Colour Cube
By setting up ratios between the three RGB values, we get a cube of possible data points, with axes R, G and B. The corners of the cube are Black, Red, Green, Blue, Cyan, Magenta, Yellow and White, with grey colours down the Black-White diagonal.
Quantising the RGB colours: in general, if each of the RGB values has N possible values, then there will be N³ possible colour combinations.
Colour Pixels Closeup – All
Colour Pixels Closeup – None (0%)
Colour Pixels Closeup – Red
Colour Pixels Closeup – Green
Colour Pixels Closeup – Red+Green=Yellow
Colour Pixels Closeup – Yellow+Blue=White
Colour Pixels Closeup – White (50%) = Gray
Colour Model - CMYK
A simple model to describe reflection effects is CMYK subtractive synthesis. In this model, printers use inks with very specific SPDs (spectral power distributions):
  Cyan (C) / Magenta (M) / Yellow (Y) / Black (K)
CMYK is called subtractive as the background is white and we subtract colours from white for display.
In theory:
  C = 1 - R
  M = 1 - G
  Y = 1 - B
In practice, 100% of each of C, M and Y does not produce black but instead produces a dark brown. To overcome this problem, black (K) is introduced, defined as the equal amounts of C, M and Y. This process is called undercolour removal.
Hence (continuing the above equations):
  K = minimum of (C, M, Y) = 'undercolour'
  C = C - K
  M = M - K
  Y = Y - K
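The equations above can be sketched as (the helper name is mine; components here are fractions in [0,1]):

```java
// Sketch: RGB -> CMY, then undercolour removal to obtain K.
public class Cmyk {
    public static double[] fromRGB(double r, double g, double b) {
        double c = 1 - r, m = 1 - g, y = 1 - b;      // C = 1-R, M = 1-G, Y = 1-B
        double k = Math.min(c, Math.min(m, y));      // K = min(C, M, Y)
        return new double[]{c - k, m - k, y - k, k}; // remove the undercolour
    }
}
```

e.g. pure red (1, 0, 0) gives C=0, M=1, Y=1, K=0; black (0, 0, 0) gives K=1 with no CMY ink.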
Colour Model – RGB / CMYK
(Figures: emission (RGB) vs reflection (CMYK))
RGB Colour Model
• The additive synthesis model refers to the resulting colour of a mixture of emitted light sources.
• The subtractive model refers to the resulting colour of a mixture of inks absorbing light.
• We can combine these into a final colour estimate:
  Final Colour = Emissive_colourset - Subtractive_colourset
               = (colour of the light source) - (colour of the object in white light)
• Example: what is the displayed colour of a blue object in yellow light?
  Light = yellow = R+G
  Object = B = 1 - (R+G), i.e. blue absorbs R and G
  Result_colour = (R+G) - (R+G) = 0, i.e. black
Colour Model – Subtractive
• Dye = Yellow: blue is absorbed.
• Dye = Cyan: red is absorbed.
• Dye = Yellow & Cyan: blue and red are both absorbed.
• Dye = all three: blue, red and green are absorbed, but the dyes are not perfect, so a brown, muddy colour remains.
• Dye = all three + black: now the result is a true black.
Other Colour Models
Many colour models exist. Some, like YIQ and YCbCr, were created specifically for television transmission; these are also used in JPEG and MPEG files. A simpler one, to get the idea, is:
HSV (= Hue/Saturation/Value; HSB and HLS are very similar)
Instead of using a hardware-based colour system, this one is user-oriented and based on the familiar artistic ideas of tint, shade and tone.
Hue: represents the colour shade (measured in degrees on a colour wheel).
Saturation: represents the purity of a colour, or the mix of the colour and white.
Value: is the intensity or luminosity of the colour (brightness).
HSV representation
HSV Quick Guide
(Three bar-chart panels, each plotting RGB levels against a vertical axis of 0-192, illustrate the examples below.)
• Base = 128: H(128,0,0) = 0°, S = Max-Min = 128/256, V = 255. ColSat = 50% W / 50% C
• Base = 64: H(192,0,192) = 300°, S = 192/256 = 0.75, V = 255. ColSat = 25% W / 75% C
• Base = 32: H(0,96,223) = 214°, S = 223/255 = 0.87, V = 255. ColSat = 13% W / 87% C
HSB Conversion in Java
As would be expected, Java provides methods for some common conversions. The Color class provides static methods for converting between RGB and HSB:
  static int HSBtoRGB(float h, float s, float b)
  static float[] RGBtoHSB(int r, int g, int b, float[] hsb)
These perform the same calculation as the HSV algorithms, except that the hue (h) value ranges from 0 to 1 instead of 0 to 360 degrees.
The API documentation explains how the hue value is handled in these methods. (Point your browser to index.html under the Docs directory in your Java installation.)
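A minimal usage example of these two Color methods:

```java
import java.awt.Color;

// Convert pure red to HSB and back using the static Color methods.
public class HsbDemo {
    public static void main(String[] args) {
        float[] hsb = Color.RGBtoHSB(255, 0, 0, null);  // null: allocate the array
        // pure red gives hue 0, saturation 1, brightness 1
        System.out.printf("H=%.2f S=%.2f B=%.2f%n", hsb[0], hsb[1], hsb[2]);
        int argb = Color.HSBtoRGB(hsb[0], hsb[1], hsb[2]); // packed 0xAARRGGBB int
    }
}
```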
La*b* and Lu*v* Colour Spaces
• The colour spaces La*b* and Lu*v* are very similar and were created this way as a consequence of the colour opponency theory.
• The diagram at right shows the typical values of L, a* and b*, where L is the luminosity and a* and b* are two colour parameters.
• These scales are accurate from a colorimetric point of view.
YCbCr as used in JPEG/MPEG files
One commonly used colour space is:
  Y (Luminance)
  Cb (Chrominance, B – Y)
  Cr (Chrominance, R – Y)
This is defined in the CCIR-601-2 standard for digital video (three-signal, studio quality), and is also used in JPEG and MPEG files.
The luminance component Y is almost the same as the Value in HSV or the Luminance in La*b*. This means that all the colour information is in the Cb and Cr components, known as the chrominance components.
Since humans are much less sensitive to changes in colour than to changes in brightness, it is possible to compress the Cb and Cr components much more than Y.
YCbCr as used in JPEG/MPEG files
YCbCr is conceptually identical to YUV and YIQ; the difference is in scaling.
Conceptually, YCbCr can be related to RGB as follows: Y represents a distance along the diagonal line connecting Black to White in the RGB colour cube, and Cb, Cr represent coordinate axes perpendicular to Y, as shown in the figure.
RGB to YCbCr
• Given that R, G, B, Y, Cb and Cr are all integers from 0 to 255, the RGB to YCbCr conversion is defined in matrix form as follows:

$$\begin{bmatrix} Y \\ C_b \\ C_r \end{bmatrix} = \frac{1}{256}\begin{bmatrix} 66 & 129 & 25 \\ -38 & -74 & 112 \\ 112 & -94 & -18 \end{bmatrix}\begin{bmatrix} R \\ G \\ B \end{bmatrix} + \begin{bmatrix} 16 \\ 128 \\ 128 \end{bmatrix}$$

• which can be re-expressed as

  Y  = ( 66 R + 129 G +  25 B) / 256 + 16
  Cb = (-38 R -  74 G + 112 B) / 256 + 128
  Cr = (112 R -  94 G -  18 B) / 256 + 128

For you graphics people, think of this as a scaled rotation matrix.
YCbCr to RGB
• The YCbCr to RGB conversion is defined in matrix form as follows:

$$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \frac{1}{256}\begin{bmatrix} 298 & 0 & 408 \\ 298 & -100 & -208 \\ 298 & 516 & 0 \end{bmatrix}\begin{bmatrix} Y - 16 \\ C_b - 128 \\ C_r - 128 \end{bmatrix}$$

• and this can be re-expressed as

  R = (298 (Y - 16)                  + 408 (Cr - 128)) / 256
  G = (298 (Y - 16) - 100 (Cb - 128) - 208 (Cr - 128)) / 256
  B = (298 (Y - 16) + 516 (Cb - 128)                 ) / 256
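Both conversions can be sketched directly from the equations (the class name is mine; integer division matches the /256 in the formulas, and results are clamped to 0..255):

```java
// Sketch of the integer RGB <-> YCbCr conversions above.
public class YCbCrConvert {
    public static int[] fromRGB(int r, int g, int b) {
        int y  = (  66*r + 129*g +  25*b) / 256 + 16;
        int cb = ( -38*r -  74*g + 112*b) / 256 + 128;
        int cr = ( 112*r -  94*g -  18*b) / 256 + 128;
        return new int[]{y, cb, cr};
    }

    public static int[] toRGB(int y, int cb, int cr) {
        int yy = 298 * (y - 16);
        int r = clamp((yy                  + 408*(cr - 128)) / 256);
        int g = clamp((yy - 100*(cb - 128) - 208*(cr - 128)) / 256);
        int b = clamp((yy + 516*(cb - 128)                 ) / 256);
        return new int[]{r, g, b};
    }

    static int clamp(int v) { return Math.max(0, Math.min(255, v)); }
}
```

Black (0, 0, 0) maps to Y=16, Cb=Cr=128 and white maps to Y=235: the nominal video range.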
So you think you can invert a Colour??
• Given a colour, which of the options below is its inverse?
Colour (Human Skin)
R: 255 H:   21
G: 153 S: 255
B:   51 L: 153
RGB (255 - R,G,B)
R:     0 H: 149
G: 102 S: 255
B: 204 L: 102
HSL (H += 128, L = 256-L)
R:     0 H: 149
G: 104 S: 255
B: 208 L: 104
HSL (H += 128 only)
R: 255 H: 149
G: 153 S: 255
B:   51 L: 153
RGB (R=(G+B/2), G=...)
R: 102 H: 149
G: 153 S: 128
B: 204 L: 153
Lecture Summary
 …from last lecture
 Image Processing Definitions
 Image Processing in AWT
 The MVC pattern (Image producer / consumer / observer / filter)
 Point Operations – Single Pixel Filters
 Image Histograms
 Transfer Curves
 Geometrical Operations
 This lecture
 Neighbourhood Operations - Multiple Pixel Filters
 Convolutions
 Median and other statistical 'order' filters
 Basic Colour Theory
 Colour Models: RGB, CMYK, HSB, Lab/Luv, YCbCr/YUV/YIQ,