
HANDBOOK
of
Video Test & Measurement

HAMLET VIDEO INTERNATIONAL LTD. 2003

--------------

'Everything you want to know about
Video Test and Measurement
but don't want to ask'

--------------

Contents

WHY DO WE MEASURE?
PART 1 THE SYSTEM
THE BEGINNING... AUDIO
AUDIO DYNAMIC RANGE
MONITORING AUDIO
STEREO
FREQUENCY RESPONSE
WAVEFORM
THE PICTURE AS A WAVEFORM
THE VIDEO SYSTEM
THE TIMEBASE
THE VIDEO WAVEFORM
WORKING WITH WAVEFORMS
MEASURING SIGNALS
THE WAVEFORM MONITOR
MEASURING VIDEO
BLACK LEVEL
WHAT THE WAVEFORM MONITOR SHOWS
TIMING AND SYNCHRONISATION
GOOD WORKING PRACTICE
THREE COLOUR VIDEO
COLOUR BARS
COMPOSITE VIDEO
VECTORS
THE COLOUR BURST
TV STANDARDS
THE VECTORSCOPE
COLOUR TEMPERATURE
COLORIMETRY
COLOUR GAMUT
WHAT IS DIGITAL VIDEO?
PARALLEL v SERIAL
SERIAL DIGITAL BASICS
EMBEDDED AUDIO OVERVIEW
WHY GO DIGITAL?
SQUEEZING DATA ONTO THE HIGHWAY
JPEG AND MPEG
MEASUREMENT OF DIGITAL VIDEO
CONNECTING EVERYTHING TOGETHER
TERMINATIONS
LOOPING THROUGH
SOCKETS AND TERMINATIONS
WORKING WITH CAMERAS
THE PICTURE
COLOUR BALANCE
SENSITIVITY AND EXPOSURE
LIGHT CONTRAST RANGE
EXTENDING THE CONTRAST RANGE
CAMERA SHUTTER
GAMMA
COLORIMETRY
DETAIL
A FEW HINTS
THE DIGITAL CAMERA
SIGNAL LEVELS
MORE THAN ONE CAMERA... STUDIO TIMING
COMPONENT COLOUR TIMING
COMPOSITE COLOUR TIMING
TIMING PROCEDURE
THE SYNCHRONISER
TIMING A REMOTE SOURCE
INLAYING THE GRAPHIC
THE GREYSCALE
THE DIGITAL STUDIO
VIDEO TAPE
GENERATION LOSS AND DROP OUT
DIGITAL ERROR DETECTION OVERVIEW
LEVELS AND TIMING
SC-H PHASE
AUDIO
THE FINAL CHECK
WORKING WITH PICTURES
USING THE PICTURE MONITOR
SETTING UP THE PICTURE
SETTING THE MONITOR WITH PLUGE
SETTING COLOUR BALANCE
ENGINEERING WITH PICTURES
WORKING WITH TEST SIGNALS
WHAT IS DISTORTION?
PULSE AND BAR
RAMP
BOWTIE
OTHER TEST SIGNALS
A FEW TIPS
PART 4 STANDARDS
GAMMA
TV STANDARDS
WHY DO WE MEASURE?
The requirement to measure is fundamental to television and video. There are a number of reasons. In the first case, the
medium is full of variables; one has only to look under the cover, or down the menu of a typical piece of video or audio kit to
see how many set-ups there are. The long and devious signal paths demand continuous and accurate monitoring if
pictures and sound are to be carried reliably and accurately. This requires measurement against international standards
which apply to all aspects of the broadcasting and video system.
These standards have to be maintained at every point of the signal chain. Fundamental to this is that the signal
must be generated accurately and to the correct standard, and that accurate monitoring and measurement is provided
along the transmission path to ensure the integrity of the signal is maintained.
The video system has limits within which the signals must lie and these are determined by two parameters. Firstly there is
the minimum signal that can be sent without being obscured by the system operating 'noise', an ever-present phenomenon
where electronic processing and transmission take place. Secondly, all systems impose a maximum limit to the magnitude of
the signal, a condition often known as 'headroom'.
These two limits can be likened to a highway. There is the road on which the signal rides, and bridges under
which it must pass. Should the signal not ride smoothly, or it strikes an obstruction, or a bridge, then it will be damaged. If it
is restricted by other traffic it will be slowed. In picture terms these limits translate into the brightest white and the blackest
black that the system is able to reproduce, and the amount of information sent in a given time. In audio terms, the maximum
is the loudest sound, the minimum is the delicacy of scene silence, or atmosphere.
Maintaining the signal within these limits is achieved by good design and correct operation. Accurate, consistent
and reliable measurement is the basis of video engineering. Regular checks of signal parameters are essential. For
example, the average television signal passes through many paths and processes from studio to home. At any point a
faulty component or set-up error may alter this signal. A little here, a little there, an accumulation of distortions may make the
signal unusable. It is only by strict monitoring procedures that such errors are seen and corrected before a fault situation
arises.
Nor is the simple video chain of camera to recording medium any less strict: the signal is still subject to many
processes. With video tape, the requirement to record the signal as efficiently as possible, and then to extract it once more,
is a demanding one. Finally, the signal may be subjected to the very strict international standards of a programme chain that
its originator is unaware of.
Time is the other significant element that must be considered. Video is a time based system where accurate
synchronisation of pictures is central to its function, and measurement of time parameters is an everyday requirement.
Signals require a specific time to travel a specific distance, and the recording process is time sensitive. Both place
very considerable demands on accurate time measurement.
Audio has now become part of the video system, for the two are often linked together and become inextricably
intertwined in transmission routes and recording media.
It is the ability to measure and to understand what these measurements mean that is the principal building block
of reliable and consistent video and audio broadcast systems.

PART 1 THE SYSTEM

A SOUND WAVE IS CONSTANTLY VARYING PRESSURE...

THE BEGINNING... AUDIO


Sound travels from source to destination in serial form as variations of air pressure. Whether the steady tone
of a flute or the crash of an impact, the process is the same. Audio is the term adopted by the industry for sound
converted to an electric signal.

[Fig 1.1: programme amplifier with programme meter and bargraph display]

A practical audio system places limits on the signal that may be sent, or transmitted. The maximum is that
permitted by the system without overload; the minimum is the quietest sound discernible above system, or background,
noise. The loudness of sound reproduction is limited by the power available in the loudspeaker.
Measurement of sound requires that the ear's response is emulated in terms of loudness and frequency
response. The ear has an extremely wide dynamic range, or loudness scale, of about 100,000 : 1. The ear, with its
sensitivity and huge dynamic range, expects the reproduced sound to be as 'life-like' as possible and to faithfully reproduce
those sounds near to silence.
So wide a range can only be measured with a non-linear scale; some means has to be found to compress the
range down to make practical measurement possible. Fortunately, the ear has a response that closely replicates a
logarithmic characteristic, and the standard unit for sound signal level, the decibel, or dB, is based on a log scale. A
logarithmic ratio has no absolute value, whilst the signal has power, which is an absolute unit. Sound levels are therefore
referred to a standard power level defined as 1 milliwatt in a resistive load of 600 ohms. Fig 1.1 shows signal level
metering after a microphone amplifier, and before transmission.
The circuit of Fig 1.2 is of a signal source driving a current to a destination, across which a meter reads the
resulting signal voltage.

[Fig 1.2: a 600 ohm signal generator (source) driving a signal current along a line, or cable, to a 600 ohm load (destination), with a signal meter across the load]

One milliwatt in 600 ohms is equivalent to 0.775 volts. The meter, however, may be calibrated in whatever form
is required. The illustration uses a signal generator producing the standard tone test signal: a sine wave of 440Hz. Actual sound is
rarely constant in level and the meter must therefore be able to follow these variations if the measurement of audio peaks is
to be reliable.

AUDIO DYNAMIC RANGE
The audio reference level in decibels, equivalent to 0.775 volt, has been fixed at 0dB. For a dynamic range of
100,000 : 1, the decibel scale will be:

Linear scale (volts):   7.75   0.775   0.0775   0.00775   0.000775   0.0000775
Log scale (dB):          +20       0     -20       -40        -60         -80

The scale is typical of the range available to audio systems. As can be seen the evenly spaced numerical scale
in decibels is well suited to a visual display. Decibels and signal voltage are related by:

Decibels (dB) = 20 × log10 (Output Signal Voltage ÷ Input Signal Voltage)

This is the ratio of signals expressed in dBs. If the change in sound signal level is a factor of 2 then this is
equivalent to 6dB. The ear perceives this as a significant change whereas a change of 3dB is just discernible. A change
of 3dB is equivalent to doubling or halving the signal power. The distinction between decibels related to voltage (or current)
and power is important, but in audio level metering, decibels are usually taken as voltage derived.
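These relationships are easy to verify numerically. Below is a minimal Python sketch (the function names are ours, chosen for illustration) converting voltages to decibels against the 0.775 volt reference, and showing the voltage-derived and power-derived versions of the scale.

```python
import math

REF_VOLTS = 0.775  # 0dB reference: 1 milliwatt into 600 ohms = sqrt(0.001 * 600) volts

def volts_to_db(v, ref=REF_VOLTS):
    """Voltage-derived decibels relative to the 0dB reference level."""
    return 20 * math.log10(v / ref)

def voltage_ratio_to_db(ratio):
    return 20 * math.log10(ratio)   # doubling the voltage gives +6dB

def power_ratio_to_db(ratio):
    return 10 * math.log10(ratio)   # doubling the power gives +3dB

print(volts_to_db(7.75))        # ~ +20.0, the top of the table above
print(volts_to_db(0.0000775))   # ~ -80.0, the bottom of the table
print(voltage_ratio_to_db(2))   # ~6.02, a clearly perceptible change
print(power_ratio_to_db(2))     # ~3.01, a just discernible change
```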

MONITORING AUDIO
Measurement is done with a visual display. The two common types are the VU meter and the PPM (peak programme
meter). The descriptions indicate the difference between the two. VU (volume units) metering is biased towards average
level, or volume. The PPM places more emphasis on the peaks of signal. Both have their roots in the early days of audio,
each designed for a specific purpose. Volume units relate to sound volume; loud programmes (or not so loud) could be
given a VU rating. Peak metering is aimed at ensuring maximum limits are not exceeded, in particular to avoid damage to
transmitters. Today, these origins have become blurred; it is now often no more than a matter of personal choice as to
which one to use.
The PPM has become an industry standard, particularly amongst broadcasters in the UK. The VU will be found
in studio monitoring and on recording equipment where it will be used for line up with the standard tone test signal.
The two methods were developed in the days of mechanical metering. For a programme signal that was
constantly varying, the mechanical inertia of the pointer was a very important factor. Both meters are specified with attack
and decay characteristics, or meter ballistics as it is called, making it possible for operators to observe programme
averages and peaks under standard conditions.
Mechanical systems are now often replaced by electronic displays. These do not present the same limitations
as mechanical ones, but it is desirable that these should exhibit similar characteristics if operational consistency is to be
maintained. Electronic meters will therefore have instant attack, with a decay that follows a standard form.
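As a rough illustration of meter ballistics, the Python sketch below models an electronic meter with instant attack and an exponential decay. The decay rate used is indicative only; real PPM and VU standards specify ballistics precisely.

```python
def meter_ballistics(samples, sample_rate, decay_db_per_second=8.7):
    """Instant attack, exponential decay: a simplified electronic meter.

    decay_db_per_second is an illustrative figure, not a broadcast
    specification. Returns the level the meter would indicate at
    each input sample.
    """
    # Per-sample multiplier equivalent to the dB-per-second decay rate
    k = 10 ** (-decay_db_per_second / (20 * sample_rate))
    level = 0.0
    indicated = []
    for s in samples:
        rectified = abs(s)
        # Attack instantly to a higher level, otherwise decay towards zero
        level = rectified if rectified > level else level * k
        indicated.append(level)
    return indicated
```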

[Fig 1.3: the PPM scale (marks 1 to 7) and the VU scale (-20 to +3 dB, with 0 to 100% markings) side by side, both indicating zero level]

Fig. 1.3 shows the two meter scales. Both aim to be logarithmic and show proportional decibel spacing but
practical designs vary in how well this is achieved. The indication shown is of 'zero level', that is, the standard reference
signal level used in the UK. This corresponds to 0.775 volts, or 1 milliwatt in a 600 ohm load, and always appears at 4 on the
PPM, or at -3 on the VU, although the latter may be used somewhat differently as described below.

[Fig 1.4: PPM and VU bargraph scales. The PPM marks 1 to 7 are spaced at 4dB intervals (6dB between marks 1 and 2); the 0dB reference line up level sits at PPM 4 and VU -3. The VU scale is marked from -20 to +3 dB with a percentage scale on which 0dB corresponds to 100%]

Fig. 1.4 shows how electronic bar graphs appear, although the traditional black PPM scale with white markings
has largely been replaced by LEDs and other methods of indication. Zero level line up is at 4 on the PPM, with 7 indicating
the +12dB point. In broadcast terms, this would be the maximum transmitter modulation permitted, but a further mark 4dB
above this is the absolute. As much material is now recorded, the maximum signal permitted is that of the tape before
magnetic saturation.
The VU meter differs from the PPM in that it may have a sensitivity, or gain, control to alter its calibration. This
may appear as a contradiction to the principle of a standard measuring device but the standard is derived from the line up
tone. Tone is fixed at zero level; 1 mW in 600 ohm. By setting the VU meter to a specified line up, the programme average
may be adjusted to deviate about the chosen level for a particular operation. In the case of a recording machine, the VU
may be set up for a specific tape depth of modulation. The percentage scale is referred to the maximum transmitter
modulation, which is 100%. Above 100% the scale changes to red showing the danger of transmitter overload.
Line up references are chosen to reflect the dynamic range available to the operator. In the days of AM radio
this was limited and closely defined. FM radio and modern recording techniques offer greater dynamic range, and
reference levels may be varied to make maximum use of this.
VU metering is used in many varying situations. Not every instrument follows the same standard and it is
advisable to consult the manufacturer's recommendations about the requirements of particular equipment.

STEREO
Stereo is carried by two identical audio channels. Where the signal in these is identical the result to the ear will
be an acoustic addition midway between the two loudspeakers. If the signal is to be reproduced as a mono signal, i.e.
using a single loudspeaker, then the addition will take place before the loudspeaker. Two identical signals add to make a
resultant of twice the power, i.e. 3dB higher than the originals. As stated earlier, 3dB is not significant to the ear. It is,
however, significant to the transmitter or power amplifier, for it represents a doubling of signal power, a fact that must be
taken into account wherever left and right channels are likely to be added together.
It is of particular advantage in stereo working to know the phase relationship between left and right channels
for they are created from a coherent acoustic pattern. Stereo reproduction must reconstruct the pattern, and the phase
relationship between the two channels is significant in this. Stereophonic phase is where a sound wave impinges on the
two ears at differing times. The brain interprets this as a shift in the position of the sound source. Stereo audio
reproduction relays this in two separate audio channels. Where the two signals are identical and arrive at the same time,
i.e. in phase, the ear will interpret the sound position as dead ahead.
Where these signals are added for the mono listener on a single channel and loudspeaker, the result will be one
signal of twice the power. If signals are identical but of opposite phase they will cancel. These may be said to be the limits
of stereo reproduction, and between them lies a continuous range of variations of balance and phase to suit all types of
programme. For the listener in mono, and for the recording and transmission systems, the phase relationships may be very
significant.
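The mono-sum behaviour is easy to demonstrate numerically. The Python sketch below sums two identical tones, first in phase and then in opposite phase: the in-phase pair doubles in amplitude, the anti-phase pair cancels completely. The tone generator is purely illustrative.

```python
import math

def tone(freq_hz, phase_deg, n=128, rate=48000):
    """One short block of a sine tone; phase given in degrees."""
    ph = math.radians(phase_deg)
    return [math.sin(2 * math.pi * freq_hz * i / rate + ph) for i in range(n)]

left = tone(440, 0)
mono_in_phase = [l + r for l, r in zip(left, tone(440, 0))]
mono_anti_phase = [l + r for l, r in zip(left, tone(440, 180))]

print(max(abs(s) for s in mono_in_phase))    # ~2.0: double amplitude
print(max(abs(s) for s in mono_anti_phase))  # ~0.0: complete cancellation
```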
Stereo phase is not revealed by standard PPM or VU metering and to provide this information the polar, or
Lissajous, display has been introduced. Fig. 1.5 shows how tone will appear on a polar display, giving an indication of both
phase and amplitude.

[Fig 1.5: polar displays of tone: L only, R only, L + R in phase, L + R out of phase]

Signals in phase appear vertically, whilst those out of phase are horizontal. Sound is a complex mixture of in
phase and out of phase elements, producing a complex display. It is, however, quite possible to form a very quick
impression of how these separate elements contribute to the final result.
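One common way to construct such a display, assumed here for illustration, is to rotate the left/right axes by 45 degrees so that in-phase material lies along the vertical axis and out-of-phase material along the horizontal, matching the convention just described. A minimal Python sketch:

```python
import math

def polar_points(left, right):
    """Map stereo sample pairs to polar-display (x, y) coordinates.

    A 45 degree rotation of the L/R axes: identical in-phase signals
    give x = 0 (a vertical line), anti-phase signals give y = 0.
    """
    s = 1 / math.sqrt(2)
    return [((l - r) * s, (l + r) * s) for l, r in zip(left, right)]

# An identical tone on both channels plots as a vertical line
n = [math.sin(2 * math.pi * i / 48) for i in range(48)]
print(polar_points(n, n)[:2])   # [(0.0, 0.0), (0.0, ~0.185)]
```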
The one-channel, or mono, listener must always be considered. If a stereo sound balance gives undue emphasis
to out of phase components, when these are added together and applied to a single loudspeaker the result will be zero, or
no sound.

FREQUENCY RESPONSE
The ear dictates the frequency range allocated to audio engineering. A generally accepted range of 20 to 20,000Hz is
ample. Variations occur: a slightly reduced range of 40 to 15,000Hz is adequate for good domestic sound
quality. The greater the range, the more expensive the circuit, and vice versa. Telephone quality... perfectly adequate for
speech transmission, is typically 300 to 3400Hz. Measurement of frequency, as an operational requirement, has less
significance than signal level, and is much more of an engineering procedure.

[Fig 1.6: Hamlet In Vision bar graph display of left and right audio in PPM form]

Fig 1.6 shows a typical Hamlet display of a bar graph of left and right audio in PPM form. The benefit of the
Hamlet In Vision method over conventional metering is that scaling changes between PPM and VU are available at the flick
of a switch. With the increasing use of digital video transmission with embedded audio (audio digitally combined into the video
signal), a metering display common to both, and featuring all the variants, is a distinct advantage.

WAVEFORM
Waveform is the representation of a signal in graphical form, with amplitude shown vertically and time horizontally.
The serial signal of audio is easily represented as a waveform; a typical sound may appear as in Fig. 2.1.

[Fig 2.1: waveform of a bell-like sound; air pressure plotted against time]

Fig 2.1 shows the waveform of a bell-like sound. Putting axes and units into place, as in Fig 2.2, we can see
how the wave starts at zero, rises steeply to almost level 4, falls back through zero, and eventually subsides to zero
once more after a series of oscillations.

[Fig 2.2: the same waveform with axes added; signal level (-2 to 4) plotted against time]

The units of level in Fig 2.2 are arbitrary. Using a waveform display as simple as the one shown would not give
sufficient resolution of dynamic range; the scale is linear and would not represent a useful range of sounds. Frequency is
clearly shown by the waveform... the oscillatory decay would have a measurable frequency. The audio waveform, therefore,
has little value operationally. It does, however, illustrate the principle of the waveform, and serves as a useful introduction
to the standard measurement of video.

THE PICTURE AS A WAVEFORM


Whereas audio has a dynamic range of 100,000 : 1, practical picture reproduction is limited to around 100 : 1, and
this figure falls to 30 : 1 for video.
The eye's function has similarity to the ear's... and marked dissimilarity. It has a logarithmic characteristic that allows
it to operate over a wide dynamic range, from dark to bright conditions. Where it differs is that a picture has no serial
structure. The eye can move and linger where it will over the scene it sees. To transmit a picture in signal form requires
that it be disassembled and ordered into a serial form.
Breaking the picture into the smallest convenient elements, or pixels, allows each pixel value to be sent as a
discrete signal. Theoretically it is possible to send each one as a separate signal, but a typical modern video camera uses
about half a million pixels, and that number of circuits is quite out of the question. The image is captured on a silicon CCD
(charge coupled device), building up an electrical charge pattern representative of the focused scene. When the picture is
converted to an electric form, electronic processing takes that image and converts it to the serial form required for practical
transmission.

[Fig 2.3: a dancer silhouetted against a light washed background, focused onto a CCD image sensor]

In Fig 2.3 the dancer is silhouetted against a light washed background. The pixel array of the sensor has an
electric signal pattern that corresponds to the black figure, a higher level for the background, and white where the light
catches the dancer's hair. The sensor processor reads off the pattern as lines, a continuous stream which forms the video
signal as in Fig. 2.4. Note how the lines become joined one after the other. Unlike the one shown in Fig 2.3, a practical
sensor will have over 700 pixels per line.

[Fig 2.4: the resulting line waveform: black for the figure, a higher level for the white background, and a peak where the light catches the hair; the end of the previous line, one line period, and the start of the next line are marked]

With the advent of digital techniques, the pixel based concept has come into common usage. A serial signal is
produced by reading individual pixel values one by one, and the conventional method is to consider the resulting sequence
as having a horizontal line-based structure. This has become the principal standard in television. The standard, and all its
detail, is essential if the image is to be reproduced at some distant destination, for the receiving apparatus must use that
standard if it is to reconstruct the picture. Now, unlike the random behaviour of the eye, the standard has imposed an order
on picture transmission.
From the earliest days, the television picture was conceived as a line structure, each one pixel high, building up
into a complete frame. Sending the signal pixel by pixel, line by line, produces an ordered sequence; the serial signal
required by the single circuit. The result, however, is a far more complex signal than audio, with far more information to be
sent as each pixel is addressed, or read, 25 times per second (30 times in the American television system).

THE VIDEO SYSTEM


To see how a standard is created, let us look at the basic picture requirements. Realistic motion requires 25
pictures, or frames, every second (fps), which is comparable to the 24 fps movie film standard. This picture rate was
decided at the commencement of television in the 1930s (America chose 30 fps for technical reasons regarding its national
power system).
The two video standards most widely used are the European PAL and the American/Japanese NTSC systems.
Both are similar in the technology they use, which determines that both have similar picture quality. For reasons of clarity,
the PAL system will form the subject of the discussion, but where relevant the NTSC system will also be included.
The PAL system uses 625 horizontal lines; horizontal scanning was the method chosen by early television
engineers. The image is read off left to right, top to bottom. About 600 lines are available to the actual image, equivalent to a
vertical pixel count of 600. This figure gives acceptable resolution on an average TV screen viewed at a distance of five
times screen height. The picture rate of 25 per second (in PAL) unfortunately gives rise to an annoying flicker in average
domestic viewing conditions. Motion perception relies on the eye's persistence of vision to obscure the constant picture
update. However, 25 fps and 30 fps are both on the borderline of perception, and as brightness increases the problem
worsens. Film overcomes this by showing each frame twice, doubling the flicker rate from 24 to 48 times a second.
Choosing a faster rate for video demands extra bandwidth, which is a cost penalty.
Bandwidth is signal space. The signal highway... the transmission system has a given space through which the
signal has to pass. The more space, the more signal that can be sent, meaning better quality or faster frame rate. The
bandwidth requirements for a picture of 625 lines, running at 25 frames per second, and having an aspect ratio of 4 : 3, will
work out like this:
At 25 frames per second and 625 lines per frame, there will be 15,625 lines per second. Therefore, the time taken
for each line is:

1 ÷ 15,625 = 64 × 10⁻⁶ seconds, or 64µS

[Fig 2.5: five pixels alternately exposed (1, 3 and 5 white; 2 and 4 black); the signal output waveform is a sine wave whose period equals two pixel widths]

Fig 2.5 shows alternate pixels exposed to light as would happen in the finest scene detail, e.g. around the
dancer's hair of Fig 2.3. Pixel numbers 1, 3, and 5 are fully exposed, 2 and 4 are not. Reading these will produce a signal
of a frequency with a period equal to twice the pixel width. This figure represents the maximum resolution of the sensor.
Allowing a practical sensor to have about 750 active pixels horizontally, the signal output will be equivalent to 750
÷ 2 = 375 cycles per line. For an active line period of 54µS (line period minus the blanked portion between lines) this will
equate to a maximum frequency of about 6.9MHz. This is the bandwidth required.
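The arithmetic of the last two paragraphs gathers into a few lines of Python; the pixel count and active line period are the nominal figures used in the text.

```python
LINES_PER_FRAME = 625
FRAMES_PER_SECOND = 25

lines_per_second = LINES_PER_FRAME * FRAMES_PER_SECOND   # 15,625
line_period_us = 1e6 / lines_per_second                  # 64 microseconds

active_pixels = 750      # nominal sensor figure from the text
active_line_us = 54      # line period minus the blanked portion

cycles_per_line = active_pixels / 2                      # 375 cycles
max_frequency_mhz = cycles_per_line / active_line_us     # ~6.9 MHz

print(line_period_us, cycles_per_line, round(max_frequency_mhz, 1))
```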
Doubling the frame rate to 50 per second will double the bandwidth required. Using 50fps and halving the number of
lines will reduce flicker but halve the resolution... or rather the vertical resolution.
The principal reason for reducing bandwidth is cost. A large bandwidth also increases complexity and imposes
design restraints, thereby causing additional cost penalties. Early pioneers of television dealt with the flicker of 25 pictures
per second another way. They came up with a neat answer that is now an integral feature of our modern video standard.
It is called interlace scanning. Interlace reduces flicker, for the same bandwidth requirement, with only a slight quality
degradation; a particularly worthwhile feature for early TV. Interlace makes use of the manipulative capability of
electronics, by first breaking every frame into two fields. Fig. 2.6 illustrates how this operates.

[Fig 2.6: interlace scanning; the line paths of field 1 and field 2, with the start and end of each field marked]

By retaining the original frame rate of 25 per second for 625 lines we retain the same bandwidth. Because each
frame is made up of two interlaced fields of 312½ lines each, flicker is reduced.
The picture display follows the same scan pattern of the image sensor. The scanned area is slightly larger than
the actual picture we see, as the lines start and end outside the viewed area, leaving some lines unseen at the top and
bottom of the picture. Fig 2.6 shows each line tracing a horizontal path from left to right with a slight downward
component, then to fly back to commence the next line. The fields are numbered 1 and 2 to identify them, field 1 starts with
a half line, field 2 ends with a half line. On completing a field, the scan returns to the top of the picture to start the next. The
process repeats every two fields, making one frame.
Scanning the picture in this way produces a serial signal that can be sent down a single wire circuit. At any
instant in time, the signal has one value only, that of the brightness of a single pixel. We can say that the picture has been
coded into a video signal.
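A simplified Python sketch of the interlaced scan order follows. It ignores the half lines at the field boundaries, so it is an approximation of the real PAL raster rather than a specification of it.

```python
def interlaced_order(total_lines=625):
    """Scan order for one interlaced frame, simplified to whole lines.

    Real PAL fields begin or end on half lines; here field 1 simply
    takes the odd-numbered lines and field 2 the even-numbered ones.
    """
    field1 = list(range(1, total_lines + 1, 2))
    field2 = list(range(2, total_lines + 1, 2))
    return field1, field2

f1, f2 = interlaced_order()
print(len(f1), len(f2))    # 313 312: two fields make one 625 line frame
print(f1[:3], f2[:3])      # [1, 3, 5] [2, 4, 6]
```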

THE TIMEBASE
On reception of the signal, the picture display must accurately follow the original image scanning, whether this be
from camera or any other video signal generator. Both must start at exactly the same time, maintain step, and finish at the
same time. Should they get out of step the display will not be able to identify which part of the signal is which, will become
confused, and fail to reconstruct the picture. To ensure image and display remain synchronised a time marker is added to
the picture waveform to keep picture reconstruction in time.
Imagine a camera viewing the scene of the dancer. Its scanning process dismantles the picture line by line. At the
start of each line, a time marker is placed in the waveform which instructs the display to commence its scan. When the first
line is complete another marker leaves the camera, telling the display to fly back and start the next line. Then, at the bottom
of the picture at the end of the first field, a sequence of markers is sent out, informing the display to return to the top of the
screen and await the second field. All the movement of the scene exists within the picture frame. Outside that, everything
is blanked, only the steady stream of timing information runs on.
The two main timing requirements are therefore line rate and field rate, or frame rate. The time markers are called
synchronising pulses, usually abbreviated to 'sync pulses', or simply 'syncs'. To give some idea of the importance of sync
pulses, carry out a time keeping exercise. A quartz clock has an accuracy of one part in 100,000. The camera will use
such a clock in its scanning processor, or timebase. If the receiver used a similar clock to control its scanning process,
over a period of one hour the two clocks might drift by more than one frame of video. Yet it is not the shift of time that is
significant. The two scans will quickly conflict with each other and within minutes, the whole system would collapse.
Timing is therefore crucial to television and the scanning, or timebase, clock in a modern television receiver needs
constant timing update to maintain the perfect synchronism required for good quality pictures.
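The clock-drift arithmetic is worth setting out, here as a Python sketch using the one part in 100,000 figure quoted above.

```python
clock_accuracy = 1e-5     # one part in 100,000
seconds = 3600            # one hour
frame_period = 1 / 25     # 40 milliseconds per PAL frame

drift_one_clock = clock_accuracy * seconds   # 36 milliseconds
worst_case = 2 * drift_one_clock             # two clocks erring in opposite directions

print(drift_one_clock / frame_period)   # ~0.9 of a frame from one clock alone
print(worst_case / frame_period)        # ~1.8 frames between two free-running clocks
```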

THE VIDEO WAVEFORM


The dancing figure depicted in Fig 2.3 will finally result in the waveform of Fig 2.7. The sync pulses are added
between the lines and signal levels are shown allocated to black and white.

[Fig 2.7: one line of the video waveform; picture period and line sync period, with white level at 1 volt, black level at 0.3 volt and sync level at 0 volt]

The waveform is the trace of voltage against position across the screen, from left to right... which is time. At the
left hand side there is a portion of black, the blanked picture between lines. Here, the line sync is inserted. Picture period
and line sync period, where line fly back takes place, are the principal parts of the whole line waveform.
Signal level is conveyed by voltage. The total signal from sync to white is called 'one volt of video'.
Note how the line sync pulses are inserted below black level. Electronics and video make possible unreal
concepts, such as blacker than black, concepts that become quite real in the world of electronics.
On reaching the end of the picture at the bottom of the screen, the scan returns to the top during the field sync
period. This is ten lines in length, which is 640 µS. The picture is therefore surrounded by a black border... line blanking
either side, and field blanking top and bottom, and one can regard this as picture framing, or a black mount. Many picture
screens are masked and do not show the picture edges, for the active size actually exceeds the physical size of the
viewable screen. Where the full scan is seen, however, blanking ensures a very clean edge to the picture.

The waveform illustration shows an idealisation; for instance, instantaneous changes of level are not possible:
all changes, however rapid, take time to execute. Also, the sync pulses are shown as having vertical edges. Bandwidth
limitations ensure that level changes fall within practical values, but in this diagrammatic style it is quite usual to show
signals represented in this way.
The scanning structure is, of course, invisible to the viewer; only the picture elements that lie within the viewed
area and are brighter than black will be seen. Therefore, to ensure that the correct values of black and white level are
reproduced and no artefact of the process is seen, the standards for signal levels and timing must be adhered to.
So far, there has been no mention of colour. The dancer in Fig 2.3 is black against a mid tone of grey and her hair
appears as white. The picture is black and white. But within that black and white image are tones of grey. In fact, this
illustrates that black and white are no more than the extremes of grey. These are the limits within which the picture maker
must work and any excursion beyond them will be removed, or clipped, as an automatic function of signal transmission.
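A sketch of that clipping action in Python, using the engineering scale already established (black at 0.3 volts, white at 1.0 volt); the function name is ours.

```python
def clip_picture(volts):
    """Clamp picture excursions to the black and white limits.

    Sync, which lives below black level, is generated separately and
    is not subject to this picture clipping.
    """
    BLACK, WHITE = 0.3, 1.0
    return [min(max(v, BLACK), WHITE) for v in volts]

print(clip_picture([0.25, 0.3, 0.65, 1.05]))   # [0.3, 0.3, 0.65, 1.0]
```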
Not all pictures reach these limits for not all pictures possess black or white, and so do not make full use of the
capacity allocated. The video standard allows the user to decide these values, to choose where to place scene black and
white by making available operation controls within the system. The significance of these will be discussed later.
There is also the special case where the total picture is black as in, for instance; 'fade to black'. Black does not
imply a lack of signal. Sync pulses must continue, and so must 'black' itself. These elements are fundamental to maintain
synchronism. At black and below black, the video sequence must continue unbroken.

WORKING WITH WAVEFORMS


MEASURING SIGNALS
All transmission systems, whatever the medium or method used, have maximum signal limits. Where signals
exceed this maximum, distortion of the signal and the information it carries will occur. The form and degree of distortion will
depend on the system and how resilient it is. The most common distortion is caused by the signal exceeding the limits as
defined by the standard.
Time is also subject to distortion. Signals take specific times to pass through systems and where these time
delays, as they are called, are altered, timing errors will arise.
Another implication of sending non standard signals arises where a transmission path is shared. A non compliant
signal may compromise the safe passage of other signals.
Nor is it only signals that are too large that cause problems. Too small a signal, whilst not suffering the distortions or difficulties
described above, is itself compromised by occupying too small a space. Every system, whether processing, storage, or
transmission, is likely to add artefacts to the signal. The term 'noise' is often applied: a term that covers spurious signals
that are not part of the original, but appear nonetheless at the destination. Should the signal be so small as to be confused
with noise, it will suffer degradation.
These comments apply to audio and video. Distortions are always present to some degree. The point at which
they become annoying determines the minimum quality standard that must be achieved.

THE WAVEFORM MONITOR


The basic workhorse that deals with video measurement is the Waveform Monitor, or WFM. The means to see a
waveform, as a graphical representation similar to the ones depicted here, has traditionally made use of the cathode ray
tube. Over the years there has been a divergence in design but the basic principle remains the same: that of a spot of light
scanning across under the control of a timebase. Where the two differ is that the picture has varying brightness... black to
white, whilst the WFM moves the spot in a vertical motion... from black to white.
Recent innovations regarding signal processing have allowed the picture monitor to also double as a waveform
monitor. Leading this development, Hamlet have a range of instruments that offer the dual function of measurement and
picture assessment.
The process of synchronisation of picture display to picture source has already been described. The same
applies to the WFM: its trace must also be locked in time to the signal it receives. There are, however, subtle variations in
how this is done. The WFM timebase clock uses a variable time rate, providing different time lengths over which the signal
may be observed. To ensure its clock starts at the right point, i.e. it is synchronised, requires a trigger, or sync, input to the
WFM. This trigger input may be the same signal as that which is observed, or another.

MEASURING VIDEO
The video signal is generated and transmitted as a voltage, and therefore requires a voltmeter to measure it.
Voltmeters are usually associated with steady-state conditions but video is not steady-state; it is a constantly varying
signal of a moving picture. It also has the two distinct elements, picture and synchronising pulses. The WFM is a graphical
display of amplitude against time. It uses two scales; amplitude is measured along the vertical axis, time is measured along
the horizontal axis. The two dimensions of video fall neatly into this format. The scale is therefore a grid, or graticule. Fig 2.8
is an actual waveform, showing the time and amplitude scales. The two dimensions are often referred to in their graphical
terms: X is horizontal, and Y vertical.

[Fig 2.8: Hamlet waveform display of one line, with time and amplitude graticule scales]

The illustration in Fig 2.8 is a Hamlet waveform display. As with a ruler, measurement is done by shifting
either the graticule or the waveform until the section of interest is aligned against a graticule line. The graticule is scaled in
horizontal lines spaced at tenths of a volt from 0 volt to 1.0 volt, from the bottom of line sync to white level. Horizontally, a
time scale is placed along the 0.3 volt level, calibrated in microseconds, with 0.2µS subdivisions.
Amplitude is measured by shifting the waveform vertically against the graticule until black level either side of the
sync pulse is coincident with the 0.3V line. The sync level should be at 0V. Picture levels may now be measured. The
silhouetted figure is black, or near black. The grey background is not a constant value, the result of uneven lighting, and
shows as sloping away from the brighter centre behind the figure. The strong backlighting makes the dancer's hair reach
up to, and if very bright may exceed, white level.
The waveform used for this analysis is of one line only. The WFM may display any line, or all lines together. The
latter is the most common form of display as it shows signal levels over the whole picture.
The sync period may also be measured along the timescale at 0.3V. Sync pulses are 5µS wide. The line sync
period, that is the time between the end of picture on one line and the start of picture on the next, is about 10µS.
The scale of measurement is adjustable by changing the amount displayed. Fig 2.9 shows two lines. The
graticule scales remain the same; it is the observation time that is changed. In this case it has doubled. The line period can
now be measured between the sync pulses on two lines. On this scale it is 64µS.

[Fig 2.9: waveform display of two lines; the line period of 64µS measured between sync pulses]

[Fig 2.10: standard terms of the line waveform; the line blanking period (sync period) with front porch, line sync and back porch, the picture period, black level, sync level, the line sync leading edge and the line period; overall amplitude 1 volt]

Fig 2.10 sets out the standard terms for various parts of the waveform. The line blanking period contains the line
sync pulse, with back and front porches. The porches also represent video black level, which separates the sync period from
the picture period. The line period may be measured between any two similar points, but it is convention to measure from
leading edge to leading edge of line sync. Blanking periods, and therefore picture periods, may be less rigidly adhered to
than other timings.
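The leading-edge convention lends itself to simple automation. The Python sketch below finds line sync leading edges in a sampled waveform and measures the line period between the first two; the threshold and the sample rate in the comment are illustrative assumptions, not standard values.

```python
def sync_leading_edges(samples_volts, threshold=0.15):
    """Indices where the signal falls through the threshold.

    With sync at 0V and black at 0.3V, a fall through 0.15V marks
    the leading edge of a line sync pulse.
    """
    return [i for i in range(1, len(samples_volts))
            if samples_volts[i - 1] >= threshold > samples_volts[i]]

def line_period_us(samples_volts, sample_rate_hz):
    edges = sync_leading_edges(samples_volts)
    if len(edges) < 2:
        return None
    return (edges[1] - edges[0]) / sample_rate_hz * 1e6

# With a capture sampled at 13.5MHz, leading edges 864 samples apart
# measure as 864 / 13.5e6 * 1e6 = 64.0 microseconds
```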
All the waveform examples described thus far relate to the line, that is, across the picture. An alternative is to
observe vertically, by field or frame, from top to bottom of the picture. The difference between the waveforms derived from
the horizontal or vertical is one of time. The first is viewed on a 64µS line period scale, the second uses a 20mS field period
scale, or a 40mS frame period scale, the latter corresponding to the 25 frames per second of PAL video.
The field waveform consists of all lines making one field shown sequentially. Fig 2.11 shows the 312½ lines of
one PAL field; they are so close together that their individual identity is lost. This is a sequential display of lines, from top to
bottom of the picture. Line syncs appear as a continuous line at 0V, or sync level, but it is possible to see the field sync
pulse as a thickening of the trace where the two fields join. The time scale is 1/50 second, or 20mS.

[Fig 2.11: field waveform; the 312½ lines of one PAL field on a 20mS time scale]

[Fig 2.12: frame waveform; two fields with the field sync period visible between them]

Over two fields, or one frame, the display will be as in Fig 2.12. The gap between them is the field sync period,
the blanked portion at top and bottom of the screen.

BLACK LEVEL
We have seen how the video signal is made up of two distinct parts: the picture and the technical. The technical
part is the sync information lying below black level. Black level not only provides separation between the two, but offers a
picture black reference. It is from black that the picture is built up. Before the photographer casts light upon the scene,
there is black. Black is a crucial element of picture construction. This does not mean all pictures will necessarily possess
black; nonetheless, apportioning a precise value for black is essential.
For a picture to be reproduced accurately, it must have its tonal values defined. It is insufficient to state these as
relative values, i.e. that the background is brighter than the dancer. The level of brightness of the background must be
converted into a signal value and, on arrival at the picture display, has then to be reproduced at the correct brightness. To
do this every picture element is referred to black.
Because cables are prone to acquiring spurious signals that interfere with the transmitted signal, it becomes
impracticable to assign a specific voltage value for accurate reproduction of black. However, the absolute value of black
as a voltage value need never be stated if part of the waveform is electronically defined as black. The picture display can
be designed to seek out this part of the waveform and to use this as a reference on which to build the picture. This
reference black is defined in the waveform as back porch. It is a simple procedure for the picture display to locate back
porch and construct the picture from that, so placing all scenic tones at their correct values.
The value of black is very precise. Quite small variations up or down can have very considerable impact on the
picture reproduction. For instance, an inadvertent shift of picture black by only a few percent is far more critical than a
similar change at white level. The value of picture black must therefore be retained accurately. Despite black not being
present in every picture, this in no way invalidates the argument. In the same way, when the picture is reduced to black, as
in a fade-to-black, the video signal is not extinguished. The essential elements of time and black are always present in the
sync period.
As long as peak white is at +0.7 volts and sync level at −0.3 volts, with respect to the reference black level, the
signal will comply with the standard. On generation, the signal must be given actual values; the standard states that back
porch must be at zero volts. Practical circuits, whether transmission or recording, may cause this to change. Over quite
short cable distances, a hundred metres or less, interference from other apparatus may affect the signal, the most
common form being that induced by power circuits. Picture displays and other processing equipment must have, therefore,
the means to recover the reference black.
This is the process known as dc restoration and it is carried out every line in the back porch, just before start of
picture. The term 'dc restore' means to restore the direct current component of the video signal. Video is a complex signal
of alternating and direct current components. Dc restoration has become standard practice in signal processing and
transmission to reinstate the direct component.
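A minimal Python sketch of the idea, assuming a few samples known to lie on the back porch can be identified; a real clamp is keyed from the line sync timing rather than from fixed sample positions.

```python
def dc_restore(line_volts, porch_slice):
    """Re-reference a line of video so the back porch sits at 0 volts.

    porch_slice marks samples assumed to lie on the back porch; the
    measured porch level is subtracted from the whole line.
    """
    porch = line_volts[porch_slice]
    offset = sum(porch) / len(porch)
    return [v - offset for v in line_volts]

# A line that has picked up +50mV of hum is restored by subtracting 0.05V
print(dc_restore([0.05, 0.05, 0.05, 0.05, 0.40, 0.75], slice(0, 4)))
# approximately [0.0, 0.0, 0.0, 0.0, 0.35, 0.7]
```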

WHAT THE WAVEFORM MONITOR SHOWS
We have now put in place everything required to make measurement possible. There is a voltmeter in the form of
the WFM. The parameters of the signal are set down: timing and synchronisation are defined, black level is established as the picture
reference, and the levels of white and sync determined.
It is usual to state 'one volt of video' as the amplitude of a standard test signal, though actual pictures do not always
possess peak white. One volt measures from sync level to peak white, whilst measuring from black level is generally
carried out for picture-only information. Thus there is a distinction between measuring for test or engineering purposes and
operational requirements. Whichever is used, we need the means to position, or shift, the waveform against the graticule.
The accepted nomenclature of waveform display harks back to its graphical principles, and the axes of X and Y.
Unfortunately, as with so many developing technologies, video and its terminology have been overtaken. We often find the
same symbols and terms applied to quite different parts of the system. Beware! As we proceed with our analysis of video,
confusion will arise over the use of such terms. As far as is possible these will be pointed out wherever they arise.
Because the term 'Y' will appear later in quite a different guise, care must be exercised when it is used to indicate the
vertical graphical axis of the WFM. The term 'X' is more fortunate; its use to indicate the graphical horizontal is less
problematical.
Note, however, the terms 'horizontal' and 'vertical' as regards the WFM axes must not be confused with 'picture
vertical' and 'picture horizontal'.
So, returning to our waveform displayed against the graticule: shifting the two with respect to each other is
carried out by the WFM's X and Y controls, normally called 'X shift' and 'Y shift'.
The WFM was created primarily as an engineering device; in fact, in the early days, television operations were
manned by engineers. They have left behind a legacy: the calibration of the WFM is still essentially an engineering one. For
instance, when the term 'percentage' is used, we must specify the datum of measurement. Is it sync level or black level? If
you are a photographer, it will be from black, but an engineer has an interest in the signal overall. A WFM may have a Y
scale based on 0 to 100, or if voltage is preferred, 0 to 1V, and that will be from sync level. Picture makers are concerned
with the scale from black to white, they would prefer black to be at zero, and a scale in percentage that places white at
100 %. The following table sets out the relationships between these options.

             ENGINEERING          OPERATIONAL
Peak White   1.0V     100%        0.7V     100%
Black Level  0.3V     30%         0V       0%
Sync Level   0V       0%          --       --

The option to swap between engineering and operational scales is offered on some Hamlet products. But here,
most references to voltage will be for engineering purposes and will be measured from sync level. Pictures will always be
from black and usually in percentage. To be aware of the difference is very important; when measurements are quoted,
the reference should be quoted also.
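The table converts readily into code. A small Python sketch of the relationship between the two scales (the function names are ours):

```python
def engineering_to_operational(volts_from_sync):
    """Engineering scale (0V at sync) to operational percent (black = 0%)."""
    return (volts_from_sync - 0.3) / 0.7 * 100

def operational_to_engineering(percent_from_black):
    return percent_from_black / 100 * 0.7 + 0.3

print(engineering_to_operational(1.0))   # 100.0: peak white
print(engineering_to_operational(0.3))   # 0.0: black level
print(operational_to_engineering(50))    # ~0.65V: mid grey on the engineering scale
```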
Terminology will continue to cause confusion, if for no other reason than its sheer diversity. Many terms are mere
abbreviations to make equipment control and panel layouts simpler. The timebase is switchable between different time
values; the main ones are line and field, usually abbreviated to H and V, i.e. horizontal and vertical. Further, 1H is
one line, whilst 2H is a display of two consecutive lines, but this does not quite explain what one is actually seeing. Using
the normal 1H mode, the WFM shows every line, resulting in a display of all lines superimposed. The waveform in Fig 2.8 is
of one line only. To display a single line requires the WFM to count lines until the chosen one appears. To do this some
instruments have an X delay control. The delay time is set as required, usually as a countdown of lines, until the
appropriate one appears. A readout of which line, by number, may also be shown. It may also be possible to send a marker
to the associated picture display to show where that line lies in the picture; a useful feature when analysing elements of
the picture.
The 1H display of all lines reveals the unseen lines of the field period, some of them in fly back, revealed by the
continuous line at black level, and may be used to adjust X shift to place black level exactly as required. Note also what
seems to be a 'ghost line' at sync level. This is the sync pulse sequence associated with the field sync, containing line sync
pulses of much greater width, usually referred to as the broad pulses.
The term 'blanking', used to denote the unseen lines of line and field periods, is a term dating from the days when
cameras used tubes with indifferent black performance. Blanking and black are not synonymous. Black means reference
black level. Blanking merely means that region outside the picture area where sync pulses are carried. It is important to
realise the difference between the two for the terms can sometimes get confused.
Another 'name-confusion' may have been spotted. The term for 'voltage' is 'V', the same as that used for the field
period. The only clue as to which is which is that 2V cannot be the voltage of a video signal...! However, 1V can. Only
practice and experience will avoid problems here. Whilst the sense of most terms will usually be apparent, the newcomer
must take care. If in doubt, spell out what you mean.
Other shared usages of terminology will appear as the system is further investigated. Sometimes alternative
terms are introduced to offset the problem, but often these compound the problem as both expressions fall into common
usage.
Magnification of the display is very similar to using an optical magnifying glass. It is used to examine specific
features of the waveform more closely. The display may be magnified, or expanded, along either the X or Y axis; the shift
controls move the expanded waveform until the desired part is placed in the graticule. Magnification rates of 5 times to 25
times are common. As only a portion of the waveform now occupies the screen the image is theoretically less bright, a
situation rectified by increasing the display brightness.

Brightness of the display is an operation control, made necessary by the different ambient lighting conditions
encountered in practice. To compensate for brightness variation brought about by changing the time scale and
magnification, an automatic system is usually incorporated into the instrument. It is only possible to increase brightness to
the maximum permitted by the screen. Attempting to go beyond this point is liable to distort the display and make it less
sharp.

TIMING AND SYNCHRONISATION


Because video is a time based system, with the information broken down into a sequence of lines and fields,
accuracy of timing is crucial. Earlier we saw that quartz accuracy alone was inadequate; only a time locked, or
synchronised, system has the required precision.
Timing standards must be as rigorously adhered to as amplitude values. For example, two cameras linked to a
studio video mixer, or switcher, must produce signals in synchronism at the mixing point. If not then, as the two pictures
are mixed, cut, or inlaid together, the output will consist of confusing sync information, and the picture display will be
unable to function.
There are two time criteria: the accuracy of the signal at the point of generation, and timing synchronism between
signal sources. The WFM deals with both of these, but in different ways. All timings in the waveform, sync periods, picture
periods, etc., are defined by the standard and may be measured against the accuracy of the WFM timebase. When
synchronism is to be measured we must decide on an absolute reference. Time is continuous. A picture starts at some
random point in time, but arrives at a very predictable point in time afterwards, determined by how far it has travelled.
The point at which we measure is the timing reference. Where there is convergence of picture sources into a final video
programme, that point becomes the timing reference.
Timing provides the WFM with a large part of its duties. Signals take a finite time to travel from one point to
another. A signal travels along a cable at almost the speed of light, which might be thought fast enough to be considered
instantaneous and therefore ignored. For video this is not so.
An interesting exercise is to put together a simple set-up as in Fig. 2.13.

Fig 2.13 Camera 1 and camera 2 feeding WFM inputs A and B

Two identical cameras are connected to the WFM inputs A and B. Select the WFM display to H, and view both
inputs together (use the WFM manual to find out how to do this). Decide which camera will be time-master, and select the
WFM timebase, or 'trigger', to that input. You will find that one waveform is fixed, while the other drifts forwards or
backwards, maybe slowly, but moving. This is because the camera timebase clocks are not locked, or synchronised. The
WFM's own timebase can be selected to either input depending which you prefer to be master (some WFMs don't provide
the option). In Fig 2.13, selecting the timebase, or trigger, to input A will make camera 1's waveform stationary; selecting
input B will make camera 2's waveform stationary.
This simple demonstration will show how rapidly the two cameras slip out of sync. Only when the waveforms
coincide are the sources in sync, and only when both are stationary are they locked. The experiment may use any type of
video source; capgens, video tape, off-air pictures. The effect will be the same. Timing is fundamental to studio and edit
systems and will be pursued at length later in the book.
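To put a figure on how rapidly the slip happens, consider the clock tolerances involved. A minimal sketch in
Python, assuming a 50 ppm frequency difference between the two free-running camera clocks (an illustrative figure,
not a quoted specification) and the 64 microsecond line period of 625-line video:

    # Rough estimate of how quickly two free-running cameras slip out of sync.
    # The 50 ppm clock difference is an assumed figure for illustration only.
    line_period_s = 64e-6                 # one 625-line TV line is 64 microseconds
    clock_difference_ppm = 50             # assumed mismatch between the two clocks
    slip_per_second = clock_difference_ppm * 1e-6
    seconds_to_slip_one_line = line_period_s / slip_per_second
    print(f"{seconds_to_slip_one_line:.2f} s to drift a whole line")   # ~1.28 s

Even well-matched crystals will slip a whole line in a second or so, which is why the drifting waveform is so
plainly visible on the WFM.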

GOOD WORKING PRACTICE
Measuring the signal requires the same care as measuring anything else: the correct conditions and a
steady eye. A comfortable position is essential. The WFM should be placed to avoid spurious light or distracting reflections.
CRT-based WFMs, including Hamlet products that use picture displays, produce light and are therefore adversely affected
by ambient light falling onto the screen. The more recent use of LCD displays reduces this problem as these are light
reflectors; unless, of course, they have their own internal illumination, in which case ambient light may well require
controlling.
Fatigue is the enemy of reliable and consistent results. The correct viewing distance is important, and the screen should be
as close as possible to the eyeline, normal to the screen-eye axis. Where instruments are shared there will
inevitably be compromise between the users, but giving thought to layout and the needs of each individual is most important.
If your workplace is a good place to work you are much more likely to do good work!

THREE COLOUR VIDEO


So far we have studied only monochrome, or black and white, video. The addition of colour alters the situation
radically. For one thing, the amount of colour in a picture can vary considerably, from very muted pastels to lavish
vibrancy. In principle, colour demands three separate video channels.
It is an established principle that any colour may be described in terms of red, green and blue (RGB). These are
the colour primaries of additive colour; the distinction between adding coloured lights together and reflecting white light
from coloured dyes is very significant. Additive colour is that produced by the screen of a picture display, where red green
and blue in equal amounts add to make white light. The result of adding coloured paint to white paper is to reduce reflected
light and approach black.
We must also establish a few other principles before a full understanding of colour can be gained. A viewing
screen based on red, green and blue produces no light when all colours are turned off, resulting in black. With each colour
fully on and of equal value, the result is white. Between the limits of turned off and fully on, an infinite number of grey tones
and nuances of colour are achievable. The single most important feature of the system is that when the three colours are
of equal value, the result is grey... remembering that black and white are only special cases of grey.
In practice, the black of one picture display may differ from another, depending on its design and the conditions
under which it is viewed. Modern screens are made to appear black by absorbing as much ambient light as possible,
thereby permitting good reproduction of picture black. Design differences cause colour differences. The same face
photographed by the same camera may appear to the eye to be slightly different when viewed on different screens. Such
is the acuity of the human eye to colour. Nor will two people see colour in exactly the same way, or the same pair of eyes
under differing conditions, or on different days. Our eyes are inconsistent and unreliable where the absolutes of vision are
concerned.
The choice of colours... which red or green or blue, was dictated by the availability of phosphors for the CRTs
used in early colour television. That standard still applies, although there has been steady improvement in this field over
the years. The principle of equal values of RGB producing grey has been incorporated into the video standard. Whatever
the picture source or processing applied, this standard always holds.

Fig 2.14 Light from the lens is split into its components by three sensors, giving red, green and blue (RGB) colour outputs

Fig 2.14 shows a typical camera with three CCD sensors. Many low cost cameras use a single sensor with
filters, but the principle remains. The light is optically split into the RGB colour components, processed and delivered as three
individual signals.

Fig 2.15 The dance scene analysed as red, green and blue waveforms, each from 0% to 100% (0.7V), and the
equivalent luminance signal from black (0%) to white (100%)

Returning to the dance scene used earlier, Fig 2.15 presents it as a colour signal. The figure remains black with
hair reaching white level, against the grey background, and the scene is shot in white light. The colour camera analyses in
red, green and blue, and as the illustration shows, all three colours have identical values. Imagine what the waveforms
might be if the dancer swapped her black costume for, say, a red one.
The colour television standard states that the three colours, red, green and blue, each of 0.7V, when added
together will produce 0.7V of luminance, or white. The resulting picture will therefore be varying tones from black, through
grey, to white. This is the principle of equal values, and the electronics are designed to achieve this at the video standard
interfaces. Relating to percentage values: 100% each of RGB will produce 100% luminance.

when R = G = B = 0 the result is black
when R = G = B = 100% the result is white
when x%R + x%G + x%B the result is x%Y, where Y is the equivalent luminance or grey value
e.g. 50%R + 50%G + 50%B = 50%Y

Considering the case of our dancer dressed in red, neither green nor blue will be present in that part of the
waveform. Other colours are a mixture of red, green and blue in various amounts. There are also the complementary
colours, the opposites to red, green and blue, which are cyan, magenta and yellow. These appear when one of the
primaries is not present:

R + B = Magenta (green is not present)
R + G = Yellow (blue is not present)
B + G = Cyan (red is not present)

Practical colours... those appearing in nature, or from ink, or by coloured light, are rarely so pure as to be
exclusively a single primary or a complement.
Colour pictures may be sent as RGB signals. However, to do so over any distance is quite inefficient. The use of
three circuits incurs three times the cost of a single black and white channel. Using RGB, each circuit carries an equal
proportion of the total picture, requiring the circuits to be identical, for each signal must experience the same processing, the
same degradation, the same manipulation. If this is not adhered to, the principle of equal values will be violated and colour
distortion will occur.

In practical terms, colour occupies a very small space in the signal when compared to the demands of tonal, or
grey values. By making use of specific features of how we perceive colour, it is possible to make simplifications. Our ability
to see fine detail, or sharpness, is less in colour than in black and white. Colour can, therefore, have a lower specification
with regard to how much detail information is sent. Lowering resolution in this way directly translates into a bandwidth
saving. Our vision is more sensitive to differences of hue than of saturation, that is, we are more aware of colour
differences than colour intensity. Another factor is that our eye's acuity peaks in the green part of the spectrum. Of the
three primaries, green most closely resembles the luminance signal.
Because of such characteristics in our perception of colour, benefits can be gained in the design of the system
and the way pictures are transmitted. The main points are:
1) The black and white, or luminance, signal is so important in its own right that it is separated from the colour
component. The luminance value, and fine scene detail, i.e. sharpness, is contained in this signal.
2) Having separated off the luminance signal there is now no need to have full spec colour signals.
3) The luminance component is subtracted from the red and blue signals lessening the demands even more.
4) Because green is so similar to luminance, it can be discarded.
Now only one high spec luminance circuit is required, plus two lesser circuits for the colours. The three full spec
RGB component signals have now been replaced by:

Y the luminance signal, which is RGB added together
Cr (R − Y) red minus luminance
Cb (B − Y) blue minus luminance

The Y signal has the advantage that it is the complete black and white signal, so a usable picture is still
available should the colour signals get lost, or if a black and white picture is all that is required. The colour signals may be
specified for colour resolution only, so trading off unnecessary signal capacity. Green is recovered with a little
mathematics worked out by the electronics in the picture display. This then is the basis of component video.
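That 'little mathematics' can be sketched in Python. This is a minimal illustration, assuming the luminance
weighting quoted later in the colorimetry section (0.3 lumens red, 0.59 green, 0.11 blue):

    # Minimal sketch of component coding and green recovery.
    # The 0.3 / 0.59 / 0.11 weights are the lumen values quoted in the
    # colorimetry section later in this chapter.
    def encode(r, g, b):
        y = 0.3 * r + 0.59 * g + 0.11 * b
        return y, r - y, b - y            # Y, R-Y, B-Y

    def recover_green(y, r_minus_y, b_minus_y):
        r = r_minus_y + y                 # restore red from R-Y
        b = b_minus_y + y                 # restore blue from B-Y
        return (y - 0.3 * r - 0.11 * b) / 0.59

    y, ry, by = encode(1.0, 1.0, 1.0)     # white: equal values of RGB
    print(y, ry, by)                      # 1.0 0.0 0.0 -> no colour difference
    print(recover_green(y, ry, by))       # ~1.0, green fully recovered

Since Y already contains a weighted sum of all three primaries, the display needs only Y and the two differences
to solve for green.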
The requirement for all three signals to be maintained at the correct levels still applies. The standard is normalised
as before to 0.7 volt, or 100%, of video in each case. Should they vary then colour errors, both hue and saturation, will
arise. Note, as the discussion centres on the picture, reference to syncs disappears. That is not to say that the engineering
has been forgotten; sync pulses are carried by the Y signal as they were for black and white.

COLOUR BARS
The representation of colour values is best seen with the standard colour test signal, colour bars. This waveform
is also useful to us in learning more about the colour system.

Fig 2.16 Generating colour bars: the RGB on/off sequences for the bars W Y C G M R B BL, the luminance
signal Y (RGB added, normalised to 100%, equivalent to black & white), and the colour difference signals R−Y
(subtracting Y from red) and B−Y (subtracting Y from blue)

Colour bars represent the limits of colour and luminance in video. Fig 2.16 shows the method of generating colour bars.
Three equal value signals, but with varying on/off periods, are added together. These form the components,
RGB, and each will have a specific luminance value, e.g. green has the highest luminance value, blue the lowest. Adding
RGB together gives Y. Subtracting Y from red and blue produces colour only signals, or R − Y and B − Y. Paradoxically,
these 'Y-less' signals have very much higher amplitude than the individual RGB and Y signals from which they are derived.
That is the result of the mathematics: the subtraction of a luminance value from a colour of lower value produces a
negative signal swing where the complementary colours occur.

The blue component is the greater, with a peak to peak swing of almost 180% about the axis. The red has a total
swing of 140%. These values are far in excess of what the system requires for colour transmission and, if permitted,
would be very wasteful. The amplitudes of R − Y and B − Y are therefore reduced. The process, known as weighting,
makes a predetermined adjustment to the colour difference values. At the picture display the opposite adjustment is
applied to restore the original values and enable correct recovery of RGB.
Although weighting increases the risk of signal distortion from noise, the colour signals are less critical to the eye
and the saving of signal space far outweighs the degradation. As the eye is least sensitive to blue, it is given the greater
degree of weighting; it is also the larger of the colour difference signals. Its weighted value comes down to half the
original.
Colour bars display the maximum colour signals of the video system, and are known as 100% Colour Bars. The
weighted values are:

            R−Y    B−Y
Yellow       10    −44
Cyan        −62     15
Green       −52    −29
Magenta      52     29
Red          62    −15
Blue        −10     44

Note how the colour difference signals swing below zero to a negative value. The colour difference signals do
not have sync pulses; only the luminance signal carries sync information.
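The table can be reproduced from first principles. A minimal Python sketch, assuming the standard weighting
factors of 0.877 for R − Y and 0.493 for B − Y (these factors are not quoted above, so treat them as assumed
values; the results round to within a percent of the table):

    # Reproduce the weighted colour difference table from 100% colour bars.
    # The weighting factors 0.877 (R-Y) and 0.493 (B-Y) are assumed standard
    # values, not quoted in the text; results round to within 1% of the table.
    bars = {"Yellow": (1, 1, 0), "Cyan": (0, 1, 1), "Green": (0, 1, 0),
            "Magenta": (1, 0, 1), "Red": (1, 0, 0), "Blue": (0, 0, 1)}
    for name, (r, g, b) in bars.items():
        y = 0.3 * r + 0.59 * g + 0.11 * b
        weighted_ry = 0.877 * (r - y)
        weighted_by = 0.493 * (b - y)
        print(f"{name:8s}  R-Y {weighted_ry * 100:4.0f}   B-Y {weighted_by * 100:4.0f}")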

COMPOSITE VIDEO
The separation of colour and luminance has become established in many areas of video operation. A number of
variations of component working exist, adapted to different requirements and situations. But where pictures have to be
sent any distance, the use of three circuits remains a serious disadvantage. For national TV transmission it is prohibitive.
To bring all three into one signal in one cable produces composite video.
The international colour TV standard is based on the existing black and white video system as described earlier.
Clever design made possible a circuit bandwidth similar to that already available, and the resulting standard developed in
the USA was the NTSC system. Squeezing a three-colour television signal into a single cable, through one transmitter, to be
received by one aerial, was a considerable achievement for its time. From this standard sprang the PAL variant, adapted to
European standards of distribution, with added minor improvements.
In composite video, the Y signal is the original black and white signal. The colour difference signals are modulated
onto a high frequency carrier. The resulting colour subcarrier, or CSC, or simply C, is the chrominance signal, and is added
to the luminance signal to form the composite signal, often expressed as simply YC.
Choice of subcarrier frequency was the subject of considerable research. Both systems threw up their own
preferred figures. In each case the frequency is very precisely defined: 4.43361875 MHz for PAL and 3.579545 MHz for
NTSC. The choice of frequency is determined by various factors specific to the standards concerned. Carrier signals have
long been used in telecommunications for multiple speech transmission; the technique may be described as
putting information in a protective envelope, where signals are less likely to suffer distortion. The amount of colour
information in actual pictures may be quite modest, but its integrity is crucial, for the eye's colour sensitivity is extremely
acute. The protection afforded by the CSC technique is central to the accuracy of colour television.
The principle of encoding makes use of two subcarrier paths, one phase shifted by 90°, to modulate the two
colour difference signals onto the subcarrier. This technique is known as suppressed carrier, and has two important
benefits fundamental to composite video:
1. Colour saturation is carried by the amplitude of the subcarrier; where there is no colour, the
subcarrier will be zero.
2. Hue is carried by the phase of the subcarrier relative to a colour phase reference.
Suppressed carrier operation means that no transmission power or signal space is wasted by sending a carrier
signal when there is no colour signal to carry. There are a few other valuable features. Hue, to which the eye is
particularly sensitive, is carried by the phase angle of the subcarrier which is easier to safeguard. Subcarrier amplitude is
more vulnerable, but as the viewer is less sensitive to saturation errors the effect is minimal.
Composite is therefore a robust signal. Even so, it is an extremely complex one and care must be taken in its
handling. Recovery of the luminance and colour components at the destination is critical; the decoding process has to
operate on a signal that may have travelled over a devious route, through various links, from remote transmitters to roof-top
aerials, not to mention the vagaries of domestic receivers and video tape. Degradation may be considerable, yet the
decoder must be able to unravel the signal into its components.
To understand what this means, let us look a little closer at the way the colour is carried.

VECTORS
Vectorial display is a convenient way to analyse the colour information carried by the phase and amplitude of the
subcarrier. A subcarrier is a sine wave, conventionally shown in graphical form as amplitude against time, as in the left
hand half of Fig 2.17. On the right is the vector display; here, the sine wave is rotating about a fixed point, i.e. time is
removed from the display.

Fig 2.17 One cycle of subcarrier (0° to 360°) shown as amplitude against time, and as a vector rotating about a
fixed point; the line at 240 degrees marks where green appears

The rotating vector in Fig 2.17 represents the subcarrier: revolutions per second equate to cycles per second, or
Hz. Any point around the path of the vector represents the instantaneous voltage of the subcarrier at that instant in time.
That is what is meant by subcarrier phase. The line at 240° is where green appears on the vector display.
The vector concept is an aid to understanding phase. As phase carries colour hue, and amplitude colour
saturation, the vector display shows the complete colour parameter in a very convenient form. One rotation of 360º
contains the whole colour spectrum.
The decoder uses the same principle. It 'observes' the incoming subcarrier of the composite video by comparing
it to the original, pure subcarrier. To do this it derives its own perfect copy of subcarrier from the incoming signal. How it
does so will become apparent shortly.

Fig 2.18

Fig 2.18 shows how colour bars appear in vector form. The weighted colour components are added to the
subcarrier. Note that B − Y lags R − Y by 90°, hence its axis moves to the vertical because, by convention, vectors rotate
anticlockwise. The signal colour values extend and meet at the vector co-ordinates to define hue and saturation by vector
angle and vector length. Angle is measured from the horizontal axis starting at '3 o'clock'. Thus the three primaries and their
complements appear at the points marked.
Referring back to Fig 2.17, the green vector at 240° can now be accurately calculated:

arctan(52 / 29) = 61° ... that is, 61° plus the 180° already travelled = 241°.

This illustrates the ease with which colour can be defined in vectorial terms.
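The same calculation can be done directly with a two-argument arctangent, which resolves the quadrant (the
180° correction) automatically. A minimal Python sketch using the weighted values for green from the table above:

    # Vector angle and length for the green bar from its weighted values.
    # atan2 resolves the quadrant, so the 180 degree correction is automatic.
    import math
    v, u = -52, -29                       # weighted R-Y and B-Y for green, in %
    angle = math.degrees(math.atan2(v, u)) % 360
    length = math.hypot(v, u)
    print(f"{angle:.0f} degrees, amplitude {length:.0f}%")   # ~241 degrees, ~60%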
The subcarrier vector phase and amplitude constantly change with each variation of colour. Should the
subcarrier move through one complete cycle, or 360º, it will return to its starting point. In doing so the vector will rotate
through the whole colour spectrum. There are (theoretically, at least) an infinite number of possible colours in one 360º
sweep, all having a specific vector value.
With the colour spectrum held in this way, the ability of the decoder to measure small phase angles is essential to
accurate colour reproduction. To get some idea of how accurate, let us work out how long 360° of CSC represents.

The subcarrier frequency of NTSC is 3.579545 MHz, therefore the length of one cycle is:

1 / 3.579545 MHz = 0.279365 µS

...and for PAL, 1 / 4.43361875 MHz = 0.225549 µS

With the complete spectrum of colour contained within one cycle of subcarrier, very small variations of timing will
introduce a large colour change. The eye's acuity is such that it will be aware of a timing shift of 0.005µS, or 5 nanoseconds,
which corresponds to several degrees of phase shift. For black and white pictures the timing accuracy is 0.5µS. The figure
of 0.005µS is one hundred times more precise, and timing to this accuracy requires quite a different approach.
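Converting a timing error into degrees of phase is simple arithmetic: divide the delay by the subcarrier period and
multiply by 360. A minimal Python sketch, using the subcarrier frequencies quoted above:

    # Convert a timing error into degrees of subcarrier phase.
    def phase_shift_deg(delay_ns, subcarrier_hz):
        period_ns = 1e9 / subcarrier_hz   # one cycle, in nanoseconds
        return 360 * delay_ns / period_ns

    print(phase_shift_deg(5, 4.43361875e6))   # PAL: ~8 degrees
    print(phase_shift_deg(5, 3.579545e6))     # NTSC: ~6.4 degrees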
Such precise timing measurement is beyond the WFM. By considering chrominance and luminance separately the
matter becomes somewhat easier. The vector display now comes into its own; its display ignores luminance entirely,
showing colour in phase and amplitude.

THE COLOUR BURST


As stated earlier, the composite video must be decoded to recover the colour difference and luminance signals.
The decoding process requires a pure subcarrier reference; not chrominance, which is subcarrier modulated with colour,
but an accurate signal free from variation (remember... chrominance will be zero in the absence of colour! Like syncs, the
colour reference must be independent of picture). The decoder therefore re-generates accurate subcarrier from the
incoming composite video, using a colour reference attached to the signal. This is the 'colour burst'.

Fig 2.19 The colour burst inserted into the back porch

The decoder seeks out the colour burst and from it generates the pure subcarrier to which it is able to
synchronise, and so recovers the original R - Y and B - Y signals from the chrominance.
The colour burst is 10 cycles of subcarrier preceding the picture on each line, as shown in Fig 2.19. It is
present at all times, except during the field sync period, and is 0.3 volts peak to peak (0.3Vpp). The sync detection circuits in
receivers, etc. are designed to ignore the colour burst when deriving the black reference. The waveform in Fig 2.19 has
no picture information and is therefore often called 'colour black', or 'black & burst', and occasionally 'black and sync'. To
ensure that colour burst and colour information remain in very close relationship to each other, the burst is added to the
signal as part of the coding process.
Fig 2.20 illustrates the coding procedure.

Fig 2.20 The coding procedure: B−Y and R−Y are added to the subcarrier to produce chrominance, which is then
added to the luminance to produce composite video

Fig 2.20 uses colour bars to demonstrate the process. The original colour difference signals are shown
overlaying the subcarrier envelope. These are added to produce chrominance, and this is then added to the luminance to
produce composite video. Where there is no colour, e.g. the white and black bars, subcarrier is zero. Note how the colour
burst appears in the R - Y and B - Y components. Compare Fig 2.20 to Fig 2.16.
Colour bars represent the maximum values permitted and so form an excellent test signal for the system. Current
practice holds that 100% Colour Bars exceed the practical requirements of normal pictures, and this, the original version
of colour bars, is no longer accepted as the defined test signal for international transmission. A version based on 75%
colour saturation values is now the standard.

TV STANDARDS
Further differences now begin to emerge between the NTSC system and its European derivative, PAL.
The eye is able to resolve fine colour detail better in the red/ orange colours than in the blue/ cyan, and the
bandwidths of the colour signals are modified to take advantage of this. NTSC does this by re-aligning the colour axes. R −
Y and B − Y, as seen on the vector display, are rotated anticlockwise by 33° from the original axes to align them with
orange/ red and blue/ cyan, the latter having a lower bandwidth. These are the I and Q axes respectively. PAL does not
use I and Q modulation axes.
Although NTSC and PAL are technically incompatible, they have many common features. R − Y and B − Y are
one such, except that when modulated onto subcarrier, the NTSC axes are shifted to I and Q. Terminology differences also
emerge. In PAL, R − Y and B − Y are usually known as V and U respectively, and Cr and Cb have also come into common
usage.
The principal difference between PAL and NTSC is the additional protection PAL offers against the effect
of phase variation. Hue is described by the phase, or vector angle, which makes composite video extremely
sensitive to timing. Signals propagate along standard coaxial video cable at about two-thirds the velocity of light, so over
a 50 metre length of cable the signal will take about 250 nS to appear at the far end. This is a complete cycle of subcarrier,
or the whole colour spectrum, and so cable length is critical where colour phase is concerned: one metre of cable delays
the signal by about 5 nS, several degrees of phase. A matter requiring very serious consideration as we shall see later.
PAL was developed to offset one particular problem: that of differential phase distortion. Distortion is most likely
to occur in high levels of the signal. Therefore colours that reach the maximum, or are on high values of luminance, are
more liable to suffer degradation. Although the colour burst will provide a phase reference at the start of each line, if phase
distortion takes place after the burst it will be interpreted as colour change. PAL deals with this by adding an equal and
opposite error on alternate lines. By switching the polarity of V (the red component) every line, a phase error on one line
will appear as an opposite in the next. The result to the eye is cancellation.

The V component of the subcarrier undergoes this phase reversal every other line. The V
vector therefore swings through 180°, taking with it the V component of the colour burst to ensure the burst phase is still
true to each line. The picture display interlaces the lines, field by field, so that as the PAL switch operates on alternate
fields, adjacent lines have opposing hue errors and will therefore cancel. The result is desaturation but that is preferred to
hue change. This has become known as 'simple PAL'. A further refinement, sometimes known as 'delay PAL' is to cancel
the error electronically in the decoder rather than allow the eye to do so.
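The cancellation principle can be demonstrated numerically. A toy Python sketch, representing the chrominance
as a complex phasor (hue = angle) and assuming an arbitrary 10° differential phase error:

    # Toy demonstration of PAL alternate-line error cancellation.
    # Chrominance is modelled as a complex phasor whose angle is the hue;
    # the 10 degree differential phase error is an arbitrary assumed figure.
    import cmath, math

    true_hue = 241                        # green, from the vector section
    error = 10                            # assumed phase error, in degrees

    line_a = cmath.exp(1j * math.radians(true_hue + error))
    # Next line: the V component is sent inverted, picks up the same error,
    # and is re-inverted (conjugated) by the decoder.
    line_b = cmath.exp(1j * math.radians(-true_hue + error)).conjugate()
    average = (line_a + line_b) / 2
    print(math.degrees(cmath.phase(average)) % 360)   # ~241: hue restored
    print(abs(average))                               # ~0.985: slight desaturation

The average lands back on the true hue; the amplitude falls to cos(10°), about 0.985 of the original, which is the
slight desaturation mentioned above.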
Phase errors also affect NTSC, but this system affords no such protection against the condition. Hue changes do
occur and can be quite unpleasant, particularly where facial rendition is concerned.

THE VECTORSCOPE
Observing phase error is clearly of considerable importance. The discussion on vectors mentioned the value of
using vector displays as a means to observe hue and saturation; a function carried out by the vectorscope.
The instrument is best described as a precision composite video decoder with a vector display instead of a
picture. The luminance signal is discarded; only the chrominance is used. There are similarities with the WFM; in fact,
because there is considerable commonality, the two are often combined into the same package. A rotary shift moves the
display around the centre of the graticule (see Fig 2.22), and a magnification control enlarges the trace so that low saturation
colours may be observed. Zero colour is at the screen centre.
The vectorscope is synchronised to subcarrier, the rotating vector placing the colours as stationary points at the
colour difference co-ordinates. All greys, black and white, are non-colours; they have no vectorial value and therefore appear
at the centre of the display. The marked boxes of the graticule are for the primary and complementary colours of colour bars.

Fig 2.21 PAL signal shown on a vectorscope          Fig 2.22 NTSC signal shown on a vectorscope

Note in Fig 2.22 the phase shift shown by the rotated display; notice also that this is a 75% colour bar signal
on a 100% colour bar graticule.

In addition to variable magnification, 75% and 100% positions are also provided, corresponding to 75% and
100% colour bars. Altering the magnification from 75% to 100% is carried out by changing electronic gain. The colour burst
therefore has two marks for amplitude, as the burst remains the same at 0.3Vpp for both 75% and 100% colour bars. It
will appear larger when 75% bars are selected due to the higher display gain compared to the 100% setting.
A modern vectorscope will handle both PAL and NTSC signals; the standards are similar enough in principle to
allow this, and it offers greater flexibility. The significant difference between the two is the alternating subcarrier phase of
PAL on a line-by-line basis. Fig 2.21 shows the PAL version. The 'PAL switch'... the alternating CSC phase, may be
selected on or off. Setting it to 'off' decodes the signal as for NTSC. It is worthwhile experimenting with the facility. Use
colour bars, select between PAL switch 'on' and 'off', adjust 'phase' on the vectorscope, and note the effect.
Like the WFM, a vectorscope may have more than one input, allowing more than one signal to be displayed at a
time for comparison. An external CSC reference input will be provided to allow the display to be locked to an alternative
source of subcarrier (in the same way as the WFM can be locked to external sync). This can be a standard composite
signal, black and burst, or pure subcarrier; in some cases it may be derived from one of the picture sources. The
vectorscope will ignore everything but the colour burst when taking its synchronising reference from a composite signal.
Note the graticules differ for the two standards; I and Q axes are used for NTSC whilst PAL has V and U axes
with the addition of the reversed polarity V signal colours marked in lower case letters.

Fig 2.23

The WFM is unable to display any information about colour apart from saturation, which shows as the amplitude of
chrominance. Fig 2.23 shows how the chrominance of colour bars appears on the WFM as thickened envelopes for the six
colours.

DEVELOPMENTS
The vectorial display described here has been explained in terms of composite video. It is, however, a valuable
tool and has applications wherever colour is used, and is not necessarily limited to composite video operations. Component
video can make use of the convenience of vector analysis where monitoring is made available in composite form.
Component video uses a waveform display as in Fig 2.24. The three components sequence one after the other.
But colour parameters... particularly hue, are much more easily assessed with a vectorscope.

Fig 2.24

COLOUR TEMPERATURE
This is the relationship of light colour to temperature. As the temperature of a body rises above absolute zero
(0 Kelvin), radiation begins and increases with temperature. The eye begins to respond to the radiation as the object's
temperature rises from the infra red: first appearing as dull red, then with increasing temperature, orange, yellow and
white, finally disappearing from the visible spectrum into the ultra violet. The term 'colour temperature' is derived from this
fact of physics. Colour temperature relates to the 'redness' or 'blueness' of light. Because daylight varies in colour along
the red/blue axis the term is easily applied.
Pure sunlight has a colour temperature value of 6000K. Time of day and weather conditions will modify this to as
low as 1000K and up to 10,000K, or higher. Such variation is readily achieved from sunrise or sunset, to that of a brilliant
blue sky. Artificial light also has a colour temperature value. For instance, tungsten light... the standard incandescent
filament lamp, has a nominal value of 3000K. It is daylight and tungsten light that have been the principal light sources of film
and television from the early days, and these are now adopted as the industry standards of 6000K and 3000K
respectively.
Other forms of lighting, particularly fluorescent or discharge lamps, will have other colour characteristics and are
not easily definable in terms of the red/ blue axis.
Colour temperature is a defining parameter. It affects how a picture 'looks'; warmth and coldness are esoteric terms
attached to picture colour style. A picture with reddish, or warm, tones is a particular photographic style. Conversely, blue
carries the opposite association. The colour of light is therefore very important. Matching or mixing light sources of different
colour temperatures is part of the craft of photography. For this reason video cameras, as will be seen later, have specific
colour temperature values programmed into them, as is the case with film.
It is important to remember that colour temperature is a parameter of light colour and is independent of light level.
Fig 2.25 shows how the eye sees colour.

Fig 2.25 Eye response (blue, green and red) against wavelength, 400 to 700 nanometers (nm), with the spectral
distributions of sunlight (6000K) and tungsten light (3000K)

The eye continually 'adjusts' itself to accommodate changes in colour and brightness. The average person won't
notice the colour change from daylight to tungsten light, but on camera the difference will be very apparent. Tungsten
lighting has the advantage of being adjustable in intensity; it does, however, change colour temperature, as the value of
3000K only applies at full output. In the range from 100% down to 80% output, however, the colour temperature remains
largely unaltered.
Colour temperature may be adjusted by filters, either on the camera, or the light. Correction filters are available for
shifting from tungsten to daylight, and the reverse. Intermediate values may also be used for fine trimming of colour
temperature.

COLORIMETRY
The principle of additive colour as used in television is based on numerical colour values that are applied to the
primaries, red, green and blue. The unit of light is the lumen and any particular colour may be defined as:

n(C) ≡ x(R) + y(G) + z(B) where n, x, y and z are in lumens

The television cathode ray screen using phosphors produces 0.3 lumens of red, 0.59 lumens of green and 0.11
lumens of blue to produce one lumen of white. Normalising to equal values gives:

1 unit R + 1 unit G + 1 unit B ≡ 1 unit White

This is the video standard of RGB additive colour. Conversion from 0.3R, 0.59G, 0.11B lumens to equal values of
RGB is carried out in the camera's colour matrixing system. The reverse takes place at the viewing screen. The principle of
equal values simplifies the defining of any particular colour and its luminosity in terms of RGB values.
Not every colour will in fact form a match with a mix of RGB. The colour spectrum defines every colour by
radiation wavelength; Fig 2.25 shows the basic spectrum. Some of these 'pure', or spectral, colours are not possible
to define in terms of RGB.

32
Fig 2.26 The basic CIE colour triangle: primaries R, G and B, white point W, the colour c at r = 0.2, g = 0.7, and
the locus of spectral colours from 400nm to 700nm passing through the out-of-triangle colour c1

Fig 2.26 is the basic CIE colour triangle; the CIE is the defining organisation for colour specification. The points R,
G and B represent the primary colours. The lines joining them are the three colour axes. The line RG represents all colours
from red to green without blue; along GB are all colours without red; along BR are all colours without green. The point 'c' is
a colour defined as 0.2r and 0.7g. All colours bound by the triangle may be so defined. The curve running round from
400nm to 700nm is the locus of spectral colours. The colour at point c1 on the spectral locus is one in the yellow/orange
band, and is outside the RGB triangle. The colour TV system is therefore unable to reproduce this colour exactly.
Sodium light is a spectral colour, and that is why some street lighting appears on TV only as an approximation:
the eye can perceive colours, such as sodium, that lie outside the triangle, but the RGB system cannot reproduce them.
What is more, cameras by different manufacturers, with differing analysis, may show variations in their rendition of
spectral colours.
The straight line completing the locus from 400nm to 700nm represents the non-spectral colours: the purple range.
The point W is the white point and is at the 'centre of gravity' of the triangle. When a camera is white balanced it
forces the RGB signals to equal values, creating its own white point.

COLOUR GAMUT
Gamut is the range of colour within a specific class. The RGB gamut is large, and is the result of additive colour
from the cathode ray screen of the television receiver. Conversely the subtractive process of printing inks uses CMYK:
cyan, magenta, yellow, with black added to enhance tonal range into the darkest parts of the print. The CMYK gamut is
considerably smaller than that of RGB.
Component video makes possible the creation of colours by manipulation that lie outside the gamut of RGB. These
illegal colours have to be limited because, like excess of any signal, they will upset the system. Likewise, the colour
manipulation available to video users may also turn legal colours into illegal ones.
The camera is unable by itself to produce out-of-gamut colours. It has to conform to the RGB points as shown on
the CIE triangle. The spectral colours shown would therefore be out-of-gamut.
It is primarily because of the introduction of digital manipulation that out-of-gamut colours have become of
concern. Hamlet produce a range of instruments that indicate out-of-gamut colours. The simplicity of such devices belies
the complexity behind colour reproduction. The correction of an illegal colour to comply with the standard may need no
more than a few percent of desaturation, or a slight shift along one of the colour axes, until the instrument indicates that the
video gamut has been complied with.

WHAT IS DIGITAL VIDEO?


Component and composite video are analogue signals. Information is carried in a time and amplitude sense. The
most common form of distortion is change of amplitude, which occurs for a variety of reasons and in a variety of ways.
Digitising converts amplitude, or signal level, to a sequence of numbers, thereby eliminating the effects of such distortions.
Both component and composite signals may be converted to digital form. Digital video is, in fact, the conversion of the
established video standards from analogue to digital format. When the video signal is in digital form it also makes possible
the application of computer technology, as in digital video effects.
The digital signal form does not have an amplitude parameter, at least, not one that is as sensitive to distortion as
its analogue counterpart. Converting from analogue to digital requires measurement of the amplitude of each pixel at pixel
rate, and this value is converted to a digital number. The critical part of the digital system is the conversion process. The
signal is sampled to measure its level, a process that must be done at a rate at least twice that of the highest frequency of
the signal. In the case of composite, the sampling rate, as it is known, must be at least twice subcarrier frequency if the
colour is to be analysed correctly. Going back to bandwidth, if the signal's highest frequency is 5MHz, the sampling
frequency must be at least 10MHz, requiring the electronics to operate twice as fast as in conventional video.
A digital signal is a binary signal, that is, it has two states, on or off, usually denoted as binary 0 and 1; these are
the bits of digital processing. The waveform level is measured at each pixel, and each level, or sample, is given a binary
number representing that level. There will be as many binary numbers as there are pixels. The number of bits per sample
is fixed by how accurate we want the level measurement to be.
Binary numbers are to the base 2, that is, they count using only the digits zero and one. A simple
binary series that will count up to 8, or distinguish eight levels of signal, requires three circuits to carry this amount of data.

SAMPLE   Circuit A   Circuit B   Circuit C
   1         0           0           0
   2         0           0           1
   3         0           1           0
   4         0           1           1
   5         1           0           0
   6         1           0           1
   7         1           1           0
   8         1           1           1

Circuit A carries the most significant bit, Circuit B the next, and Circuit C carries the least significant bit. Failure of
Circuit C would be the least disruptive, but loss of Circuit A would be catastrophic. The most significant bit is the one that
determines whether the signal exceeds 50% or not. If it does, the next step is to determine whether it exceeds 75%, that's
the next bit. If it does not reach 50%, does it reach 25%? And so on. This principle of breaking every part of the signal into
bits and giving a number to the bit series is the basis of digital signals.
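This successive halving is easy to sketch in code. A minimal Python illustration of the bit-by-bit test just
described (50%, then 75% or 25%, and so on):

    # Successive approximation as described above: test against 50%,
    # then 75% or 25%, and so on. Returns bits most significant first.
    def quantise(level_percent, n_bits=3):
        bits, low, high = [], 0.0, 100.0
        for _ in range(n_bits):
            mid = (low + high) / 2
            if level_percent >= mid:
                bits.append(1)            # signal exceeds the threshold
                low = mid
            else:
                bits.append(0)
                high = mid
        return bits

    print(quantise(60))                   # [1, 0, 0]: above 50%, below 75%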
The waveform of the dancer appears again in Fig 2.27, this time with a digital sample being taken of one pixel.
The sample measures the level as exceeding 50%, but less than 75%.

Fig 2.27 The dancer waveform from black (0%) to white (100%), with one pixel sampled at a level above 50%
but below 75%

Every pixel level is given a binary number, and if we require 100 levels to describe our signal (based on an
accuracy of 1% from 0 to 100%), this will require seven circuits. In fact, 100 levels is inconvenient; if we do the binary
maths, we find that seven bits gives 128 levels. Digital video has standardised on 8 bits, which is 256 levels. A compatible
10 bit system is also used that allows for additional peripheral information to be sent.

PARALLEL v SERIAL
One way to send pictures is to connect the imaging sensor, pixel by pixel, to the display. This is parallel
transmission and, with the number of pixels used in video, is not a practical solution. Composite video converts the picture
to a serial form. Component could be described as a three-channel parallel. The digital standard includes both serial and
parallel variants of transmission. Eight bit video may be sent with each data bit allocated a circuit of its own; this is parallel
data and is shown in Fig 2.28.

Fig 2.28 Parallel transmission: video in, A to D conversion, 8 circuits of parallel data, D to A conversion, video out

The A to D is the analogue-to-digital converter, producing an 8 bit parallel output using 8 circuits. At the
destination the reverse takes place to provide the original analogue signal. A more convenient answer for longer distance
transmission is to serialise the data into a bit stream and send it down a single circuit. The penalty is that the data rate is
increased by eight times; in other words, the serial circuit requires eight times the bandwidth of one of the parallel circuits.
The cost benefits of either depend on a number of factors, but in general the advantages of serial outweigh the
disadvantages. First and most significant, it is far easier to send long distances... the multi circuit parallel system fails on
cost and also on the problem of ensuring all circuits are accurately matched. Long cables may exhibit unequal propagation
characteristics, causing the signals to arrive at slightly different times and confusing the receiving apparatus at the
destination.
The data rate will depend on how fast the signal can be sampled and what level of accuracy is required.
The process of conversion is the most demanding; analogue to digital and back again, are the points most
vulnerable to error. Measuring discrete levels means that approximations are made. Take those near to black. Distinguishing
between 0 and 1% is a far more significant level change than distinguishing between 99% and 100%. Yet the digitising
system will give equal emphasis to both.
The resulting error is called quantising distortion and is reduced by increasing the number of bits. A one-bit system
can distinguish only between two levels, and quantising will turn the picture into two levels of black and white. In systems
using 8 bit sampling it is only signal levels near to black that will reveal quantising distortion, and then only under extreme
conditions. It is a question of deciding how much tonal resolution is required for a practical system. The maximum tonal
resolution of 8 bit video is:

1 / 256 = 0.39% ... which is adequate for all practical purposes.
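The same arithmetic for a few other bit depths, as a quick Python check:

    # Tonal resolution for a few bit depths.
    for bits in (1, 3, 8, 10):
        levels = 2 ** bits
        print(f"{bits:2d} bits -> {levels:4d} levels, step {100 / levels:.3f}% of range")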

From this it would seem that the action of digitising a signal and converting it back again will inevitably distort it.
It is therefore highly desirable to avoid needless conversion and re-conversion. As a general guide: convert to digital as
early as possible, and remain digital for as long as possible, ideally right up to the very end of the chain at the picture display.
Video is more forgiving than audio in this respect because the number of levels, or bits, required is less. The
audio dynamic range is many times greater than the video contrast range. The ear, it is often argued, has a greater acuity,
and may perceive the very low levels of distortion of the digital processes, and purist audiophiles still promote the virtues of
analogue. Video is more forgiving, yet even here, the digital artefacts may quite suddenly become apparent to the eye. And
once seen, they will always be seen... that is the nature of the human psyche.
As with any system, care must always be exercised over the design and usage of equipment if the benefits of
digital video production are to be fully realised. For that matter, the same applies to audio.

SERIAL DIGITAL BASICS


625 and 525 line digital component video is produced by applying a 4:2:2 sampling structure to the analogue signal.
This process is defined by a sub-set of the international standards ITU-R BT.601 and BT.656 (these were formerly known
as CCIR-601 and CCIR-656; the label 'CCIR 601' is commonly applied to digital video coded in this manner).
The luminance (Y) component is sampled at 13.5 MHz, and the colour difference components (U and V) are both
sampled at 6.75 MHz. With 10 bit quantisation, this results in a data stream of 10 bit words at a clock frequency of 27
MHz. If the signal source uses 8 bit quantisation, 10 bit data is used with the two least significant bits of each sample
code set to binary zero. This is to maintain the same data rate.
The quantising levels employed in the analogue to digital conversion are set to give 66.4mV of headroom above peak
white and 51.1mV below black. Coded U and V signals have 50mV above and below their normal maximum and minimum
excursions.
The synchronisation pulses are discarded in the coding process, and are replaced by Timing Reference Signals (TRS)
which are inserted into the data stream to serve the same purpose. Two TRS's are used to synchronise the data
stream, EAV (End of Active Video) and SAV (Start of Active Video). These are placed at the beginning and end of the
horizontal video blanking period.
Each TRS consists of 4 words:
1) 3FF hex, i.e. all '1's
2) 000 hex, i.e. all '0's
3) 000 hex, i.e. all '0's
4) XYZ, which determines the type of TRS pulse.
The XYZ word:
Bit 9: always '1'
Bit 8: 0 = frame 1, 1 = frame 2
Bit 7: 0 = normal, 1 = field blanking
Bit 6: 0 = SAV, 1 = EAV
Bits 5 to 2: bits used for Hamming correction
Bits 1 and 0: always '0'
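A sketch of how an XYZ word is assembled, in Python. The protection-bit equations (P3 = V xor H, and so on)
are the usual BT.656 ones; treat them as an assumption here, as the text above does not spell them out:

    # Sketch of assembling the 10 bit XYZ word of a TRS. The protection
    # bit equations (P3 = V xor H, etc.) are the usual BT.656 ones and are
    # an assumption here; the text above does not spell them out.
    def xyz_word(f, v, h):
        p3, p2, p1, p0 = v ^ h, f ^ h, f ^ v, f ^ v ^ h
        bits = [1, f, v, h, p3, p2, p1, p0, 0, 0]   # bit 9 down to bit 0
        word = 0
        for bit in bits:
            word = (word << 1) | bit
        return word

    print(hex(xyz_word(0, 0, 1)))   # EAV, frame 1, active picture: 0x274
    print(hex(xyz_word(0, 0, 0)))   # SAV, frame 1, active picture: 0x200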
The period between EAV and SAV is not used by normal video and is available for other purposes, e.g. error checking,
timecodes or embedded audio.
Illegal values: the values 000 and 3FF hex are used solely by TRS pulses (EAV and SAV); they must not appear anywhere
in the active video area.
Out of gamut: values, apart from the illegal values, which should not be used.
Luminance is defined as being between peak white, 700mV (3AC hex), and black, 0mV (040 hex).
Chroma is defined as being between maximum positive, 350mV (3C0 hex), and maximum negative, -350mV (040 hex).
The values above and below these are termed out of gamut.
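A minimal Python sketch of the luminance classification just described (sample values are 10 bit, as above):

    # Classify a 10 bit luminance sample according to the rules above.
    def classify_luma(sample):
        if sample in (0x000, 0x3FF):
            return "illegal (reserved for TRS)"
        if 0x040 <= sample <= 0x3AC:      # black to peak white
            return "legal"
        return "out of gamut"

    print(classify_luma(0x3AC))   # peak white -> legal
    print(classify_luma(0x3FF))   # illegal
    print(classify_luma(0x020))   # below black -> out of gamut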
The data is serialised using a scrambled NRZI (Non-Return to Zero Inverted) code to produce a 270 Mb/s signal. This
coding method removes any low frequency component and is insensitive to polarity. The data is scrambled first to avoid
the possible transmission of long runs of '0's.
This data is output at 800mV p-p into normal 75 ohm video coaxial cable.
Due to the high frequencies, the cable losses are quite high, typically 10dB per 100 metres at 270 MHz. To allow
acceptable cable lengths, automatic cable equalisers are used at the receiver, which usually allow up to 300 metres of
cable to be used. It is important that standard cable is used, otherwise the equaliser will not compensate correctly.
Suitable cables are: PSF 2/3, BELDEN 8281, F&G 1.0/6.6.

EMBEDDED AUDIO OVERVIEW


The period between the EAV and SAV markers can be used to send embedded digital audio signals. This is known as
SMPTE 272M. Up to 16 separate audio signals may be sent in a single video channel. These are organised as four
GROUPS of four signals; the four signals are often two stereo pairs. Typically only one group will be used, giving two
stereo pairs of audio. The audio data is digitised in the sending equipment to 20 bits of resolution, usually at a 48 kHz
sample rate. Often only 16 bits are used in practice. The digitised data is arranged in packets which are placed in the
EAV-SAV space.

WHY GO DIGITAL?
The reasons are many. The one outstanding advantage that has existed from the early days of the technology is
protection of the signal from distortion and interference. As the systems have become established and costs have fallen,
so other benefits arise; picture manipulation and special effects, computer interfacing, non-linear editing, are but a few.
Transmission also benefits. Cables and circuits no longer need to operate from d.c. to the highest frequency, and
immunity from interference increases. Circuits can deliver quality signals within lower quality signal transmission systems.
Timing and synchronisation can become easier, as will be seen in the section on Production.
Inevitably, there are shortcomings. The data rates are high for a signal carried in digital form, and various
techniques have been developed to improve this.

SQUEEZING DATA ONTO THE HIGHWAY


When we enter the realm of digital video, we enter a far more complex technology. We have to ask ourselves; 'to
what level am I prepared to study this subject?' Whatever the answer, a good grounding in analogue video is essential. As
the story unfolds some basic concepts come under fire; the digitisation process does not always sit easily with some
established analogue standards.
Digitising composite video is one such. The single circuit analogue standard, composite video, has reached its
zenith... some would say it is long past. Transmission is about to take the digital form, and composite has no place in
that. But the standard will remain as long as there are TVs in the home that use it. Digitising the signal for recording and
playback is now common practice; the elimination of generation loss alone is worth the effort. But a standard that already
limits the amount of information to enable it to fit into an established system has many artefacts that compromise digitisation.
The ideal standard for digitisation is component. To digitise the three circuits, luminance and the two colour
difference signals, Y, Cr and Cb, requires a sampling rate based on the highest signal frequency, ideally four times that
frequency. Also, it would be wise, in view of international programme exchange, to remove some of the conversion
problems arising from having two world standards. So, a sampling rate that serves both 625 and 525 lines has been
chosen, which works out at 13.5MHz.
For lower resolution systems, lower rates are chosen. For instance, colour signals with half the resolution of
luminance require sampling at half 13.5MHz, i.e. 6.75MHz. On this scale, sampling the even lower resolution colour
equivalent to that of composite video, or domestic VCR, can be done at a quarter of the luminance rate and still be
adequate. A family of sampling rates has therefore emerged, based on 13.5MHz and the factor '4' that relates them.
From this the component YCrCb is denoted as 4:2:2, because luminance is full bandwidth and the colours half
bandwidth, with the lowest quality trailing behind at 4:1:1. Considering the digitisation of RGB: this would require full
resolution for all three, because each colour carries a proportion of luminance. RGB sampling is therefore 4:4:4.
Component video, YCrCb, digitally coded into a serial form, has been adopted as the digital video standard known
as ITU 601 (formerly CCIR 601), and is one signal running at 270 million bits per second, or 270Mb/s. For comparison, digital
composite video uses rates of 177Mb/s for PAL and 143Mb/s for NTSC. It is important to distinguish between sampling
rates and the bit rates that result. The former is the frequency of sampling expressed in hertz (video is in megahertz, or
MHz); the latter is in bits per second (megabits per second, Mb/s, for video). The more samples, the more bits.
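The arithmetic behind the 270Mb/s figure is simple enough to verify in a line or two of Python:

    # The 4:2:2 arithmetic behind the 270Mb/s figure.
    y_rate = 13.5e6                       # luminance samples per second
    colour_rate = 6.75e6                  # each colour difference signal
    bits_per_sample = 10
    total = (y_rate + 2 * colour_rate) * bits_per_sample
    print(f"{total / 1e6:.0f} Mb/s")      # 270 Mb/s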
At this stage it's worth mentioning wide screen. Increasing the horizontal size of the picture whilst retaining the
same resolution means more pixels. 625 line pictures have an actual horizontal pixel count of 700 plus. Translating into 16:9
video... the wide screen ratio, brings the count up to about 960 pixels, raising the data rate to 360Mb/s.
High bit rates push the circuit bandwidth requirement very much higher, and some form of data reduction is
essential for storage and transmission of television signals. But the processing power available to the digital signal far
outstrips that of any analogue system. It is now quite feasible to assess signals for redundancy, or how much of the signal
can be discarded whilst still retaining its viability. Only sending information about picture changes is a powerful tool to save
signal space. For instance, the background of the dancer's set from Fig 2.3 may be still and is therefore information that
does not have to be repeated on a frame by frame basis. This is redundancy and can, therefore, be subjected to signal
compression.
Compression will, at some point, degrade the signal, particularly where the picture demands more space than is
available, e.g. there is too much movement. Such procedures have to be carefully evaluated before incorporation into the
standard, and the degree of compression will alter with circumstances.

JPEG AND MPEG


As data compression and picture quality are interchangeable, two assessment groups came together to make
recommendations. The Joint Photographic Experts Group (JPEG) dealt with still pictures; the Moving Picture Experts Group
(MPEG) worked on moving pictures. JPEG looks for redundant data arising from similar pixel levels; MPEG compares
successive frames for commonality and transmits the differences. These are the main distinctions, but as developments
have taken place, cross fertilisation has spawned a mixed progeny.
Where MPEG discards data common to successive frames, it compromises the editing process: a single frame is
no longer complete in its own right, and a succession of frames must pass before proper re-assembly is complete. Cutting
the frame sequence breaks the MPEG routine. For this reason some non-linear editing systems use a version of JPEG called
Motion JPEG.
MPEG has become the standard video compression medium, and there are a series of variations.
MPEG 1 is aimed at CD-ROMs with 650MB of storage, giving about an hour of playing time. As a comparison,
without compression a CD-ROM would hold about 24 seconds of video.
MPEG 2 was developed for broadcast use and is of higher quality than MPEG 1. Component 4:2:2 coded into ITU
601 format runs at 270Mb/s; 4:2:2 in MPEG 2 format runs at 20Mb/s to 50Mb/s. The bit rate limits indicate that there are
variations within the standard. The MPEG system detects redundancy in the incoming video, that is, areas of the picture
that are common to more than one frame. This information is sent only once.

Fig 2.29

Fig 2.29 shows an MPEG stream. The I, or 'intra', frames are coded without reference to other frames and act as reference frames. The P frames, or
'predictor' frames, have moderate compression and enable the heavily compressed B frames to be reconstructed. The
series repeats over 15 frames, therefore the MPEG frame is 15 video frames long. This is a real obstacle to editing; tapes...
and the scenes they depict, must be aligned MPEG frame to MPEG frame. Fifteen frames at twenty-five frames per second is 0.6 seconds.
That's the editing resolution, and for an editor used to cutting to 1/25 second accuracy, MPEG compression suddenly poses
real problems.
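The editing resolution arithmetic can be set out in a couple of lines; a minimal sketch assuming the 15-frame sequence and PAL's 25 frames per second quoted above:

GOP_LENGTH = 15      # I, P and B frames per MPEG sequence
FRAME_RATE = 25      # PAL frames per second

print(GOP_LENGTH / FRAME_RATE)   # 0.6 seconds per MPEG frame
print(1 / FRAME_RATE)            # 0.04 seconds, the editor's usual 1/25 s resolution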
MPEG 2 is therefore a useful transmission and storage standard but is limited in effects and editing operations.
Alternative versions, or 'profiles', of MPEG 2 have been introduced to offset some of these shortcomings.
Variants of MPEG will continue to appear as new demands are made and the technology continues to advance.
There are other compression standards created by manufacturers for their own systems. A camera and
recorder, or camcorder, may use compression specific to that design. Whilst that is perfectly legitimate within the confines of
the camera, where that material is required to integrate with another format also using compression, interaction may occur
between the two, giving rise to unpredictable artefacts.

MEASUREMENT OF DIGITAL VIDEO


Placing the signal into digital form does not eliminate the need to check and measure. Inside the digital package is a
video signal with the same attributes as before. Excessive signal, insufficient signal, illegal colours, black errors... not
forgetting the loss of signal altogether, exist as before and take the same operational and engineering disciplines to
eliminate.
After encoding into digital form, the signal may be subject to data corruption, for instance, a bit, or data string,
may go missing. In a practical system, it is quite reasonable to expect such errors to occur and the encoding process adds
safeguards to help the decoder re-assemble correctly.
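The principle of such safeguards can be illustrated with a simple cyclic redundancy check (CRC). This is an illustrative sketch only... real digital video standards define their own error detection and handling schemes... but it shows how a decoder can tell that a block has been corrupted in transit:

def crc16_ccitt(data, crc=0xFFFF):
    # Standard CRC-16 (CCITT polynomial 0x1021), most significant bit first.
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

block = bytes(range(64))                          # a block of data words
checksum = crc16_ccitt(block)                     # sent along with the block
corrupted = bytes([block[0] ^ 0x01]) + block[1:]  # a single bit error
print(checksum == crc16_ccitt(block))             # True: block intact
print(checksum == crc16_ccitt(corrupted))         # False: corruption detected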
Digital systems, whether for signal processing or monitoring, are in the main reliable, a feature that stems directly
from the two-state binary operation and the huge amount of experience drawn from the computer world. Where a digital
signal is most vulnerable is, paradoxically, in its immunity.
Whereas analogue will give warning of impending failure... the picture will slowly deteriorate... the digital signal will
not. By the very philosophy of its design, a digital signal will ignore circuit flaws, even cover up for loss of bits, but only up
to a certain point. After that, when the errors swamp the digital to analogue conversion at the destination, the result is
complete loss of picture, or at least a permanent still frame.
To read the signal, the D to A converter must be able to identify which part of the signal is which. It does this by
first synchronising itself to the incoming data stream. If it fails to hold synchronism because of noise interference, the
conversion will fail. Ideally, D to A should take place as close to the end of the chain as possible, by which time, there will
have been considerable processing and routing, with each stage a potential source of error and interference. A re-timing,
or re-clocking, procedure at key points in the chain 'cleans' the signal before passing it to the next stage. But placing the
final element of the process right at the end of the chain means that this very crucial process has been removed from the
care and attention of good test and measurement, possibly to the home environment.
Checking out a digital video circuit with a standard WFM is not possible. A stream of binary digits, always
changing place at megahertz rate without any of the order a WFM requires, is meaningless. So the principle of automatic
checking has become part of digital operations. For example, a VTR may assess the validity of the signal constantly during
recording. Tolerances and specifications are too tight for there to be any room for variation. There is no point in recording a
signal that is marginally incorrect, for this may compound to the point of total rejection at some later stage.
Automatic monitoring of the data stream with equipment that is able to analyse and indicate the state of the signal
is also one of the methods adopted by Hamlet. The whole operation is in digital form, there is no need to convert to
analogue, the monitor simply looks at the data. Internal check systems operate to ensure the digital form is itself to the
required standard and indicate if not.

However, the opportunity to observe the video waveform must still be available. Digital WFMs and vectorscopes
decode the data and display it in the usual way. Test and measurement is therefore surprisingly similar to analogue. Carrying
the process one step further, the incorporation by Hamlet of digital video, and analogue... audio as well, into their In
Vision systems makes test and measurement and picture monitoring extraordinarily straightforward.
So digital video does not remove the need to monitor. It does allow more things to be done to the picture. It does
allow better pictures. But only if the system is designed correctly and remains functioning properly. Digital video demands
the same degree of operational and engineering discipline as does analogue.

CONNECTING EVERYTHING TOGETHER


The background to video signals and measurement has been discussed at length. Now the practical side of
connecting together and building a system is considered. But one basic practical matter has to be dealt with first: the
connecting cable and what it means.
The camera is often the first contact with video for the newcomer, and the first contact with the video standard.
But before losing ourselves in the excitement of picture making, let us clear up an issue that threatens to spoil the
satisfaction of these first tentative steps.

TERMINATIONS
The standard video connector is the BNC, a bayonet locking connector which preserves the circuit characteristic
right through from cable to equipment. Proper termination is necessary to ensure the transmission of a video signal along a
cable of any length, with minimum loss and distortion. As part of this design standard, the source must be correctly loaded,
or terminated. Only when this is done will the signal appear at its destination at the correct level and with minimum
distortion.
The standard does not allow a source to feed more than one destination. A destination draws power from the
source, therefore, two destinations, or loads, would halve the power available, and result in low level signals, or dim
pictures.
The standard does however, provide an alternative; it is to split the signal by means of a distribution amplifier, the
ubiquitous DA. The source now supplies signal power to the DA, which in turn provides signal power to the additional
destinations. Many sources already have a DA included as part of the design. Some cameras have two outlets, but not all,
and those that do may not provide identical signals to both outlets.
Looking at the problem more closely, means delving into a little engineering to learn more about the video signal.

Fig 2.30: a 75 ohm source (the camera's picture processor) feeding a 75 ohm destination (the monitor's processor) through the connecting cable.

Fig 2.30 shows in more detail how the signal passes from the camera to the monitor. Source and destination form
a circuit. There is the source processor, in this case the camera, the connecting cable, and destination processor, which is
the monitor. Note that source and destination may be any item of standard video equipment.
The 75 ohm resistors represent the source resistance and destination load, sometimes referred to as impedance.
At first sight, it may appear strange that two components that draw power only to waste it (resistors dissipate power in
heat), are included in the circuit, but this is an inevitable feature of cable communications. It is done solely to ensure
accurate reception of the signal with minimal degradation. However, if the destination does not present the correct load
value, the signal will not arrive at the correct level. If we conspire to attach two loads, i.e. two destinations, the signal
power will be divided and the level at each will be only 66%. If no load is present the signal will be twice the correct level.
These are very significant and unacceptable errors.
The termination of a cable and the signal it carries has to be the correct one: 75 ohm, and nothing else.
For those wishing to know more, they may continue the story elsewhere. Suffice to say here that video passes
from a 75 ohm source to a 75 ohm destination, and this is the reason why termination is essential.
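A minimal sketch of that story, assuming an idealised 75 ohm source whose output EMF is set so that a single correct termination gives exactly 100%:

SOURCE_R = 75.0   # ohms, the source resistance
EMF = 2.0         # volts, so that one 75 ohm load yields 1.0V (100%)

def level_percent(*loads_ohms):
    # Signal level at the destination for the given parallel loads.
    if not loads_ohms:
        return EMF * 100            # unterminated: the full EMF appears
    r = 1.0 / sum(1.0 / l for l in loads_ohms)
    return EMF * r / (SOURCE_R + r) * 100

print(level_percent(75))       # 100.0  correct termination
print(level_percent(75, 75))   # ~66.7  double terminated
print(level_percent())         # 200.0  unterminated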

LOOPING THROUGH
To overcome the restrictive inconvenience of one source/ one destination, we often see that the termination is
switchable, or removable altogether. In itself, this would appear to make the system very prone to misuse... and, indeed, it
does in the hands of the ill informed. To those who work in full understanding of the video system, however, the
advantages of flexibility are considerable.
The inputs of most professional equipment, such as picture and waveform monitors, are 'looped'. A looped input
does not present a termination and does not draw power from the source; it merely 'observes' the signal. The signal has
still to be terminated somewhere along its route, so, where the signal feeds two items of equipment, the last in the line
provides the termination.

Fig 2.31: a camera looped through the picture monitor and recorder, with a termination on the final socket.

In Fig 2.31 each destination has two sockets internally connected, or looped, to enable the signal to enter and
immediately leave. The connecting cables link up in a kind of daisy chain from one item to the next. The termination, placed
right at the end of the chain, correctly loads the camera, and a signal of the correct level appears at both monitor and
recorder. Note how, in the case shown, neither piece of equipment actually loads the signal; this function is left entirely to a
passive resistive termination occupying the final socket. In practice, both monitor and recorder will have switched
terminations. That on the monitor would be switched off whilst the recorder's would be switched on.
The standard termination load of 75 ohm is sometimes written as 75Ω. It is a value chosen to match the design
characteristic of the coaxial cable commonly used for video transmission. The standard connector is the BNC, a half-twist
bayonet locking type, which maintains the cable characteristic right through to the termination load. Other types of connector have
also found their way into professional usage from the low-end domestic field, but of varying quality and reliability.
Looping through is a feature rarely provided on domestic or similar equipment, for in these circumstances it is
more liable to misuse. A fixed termination is usual on such equipment. As domestic equipment finds its way into
professional use, the fixed termination, combined with the 'T' piece, has become an increasing cause of error, principally
where small TV receivers double as monitors.
The 'T' piece is the most commonly misused adapter; it allows two cables to be connected to one socket, allowing
additional destinations to be placed on one source. Should the monitor and recorder in Fig 2.31 be connected this way they
would double terminate the camera's output. This is the classic termination error. However, to the inexperienced operator,
the answer is a simple one: re-adjust the monitor contrast until the picture 'looks right'.
The signal error at the recorder, however, goes undetected, at 33% below the correct level, until the edit...
by which time it's too late.

Fig 2.32 shows the effect on the waveform of termination error: (a) is unterminated, (b) is correct, and (c) is
double terminated.

Fig 2.32: the waveform (a) unterminated, (b) correctly terminated, and (c) double terminated.

Hamlet, aware of this sort of operational problem, have made available a range of user-friendly WFM and picture
measuring equipment; a typical single camera situation is shown in Fig 2.33, using a Hamlet In Vision WFM to show the
camera waveform on the picture monitor. Each piece of equipment has a switched termination; the Hamlet is looped... 75Ω
switched to 'off'. All the others are switched on.
Terminations are vital; they must never be ignored. Understand and use them properly to avoid the errors that
could so easily result in a re-shoot.

Fig 2.33: the camera feeding a Hamlet In-Vision WFM with its input switched to loop, inserting the waveform display into the picture monitor; the terminations on the monitor and recorder are switched on.

SOCKETS AND TERMINATIONS
There is a range of methods for connecting video cables into equipment, with and without termination. The
following are based on BNC connectors, but other types may be found as well.

• Single socket with fixed termination: the socket terminates the signal. Mainly domestic.
• Single socket, unterminated: requires a 'T' piece either to feed on, or to have a BNC termination fitted. Now less common.
• Single socket with switched termination: requires a 'T' piece to feed on. The switch must be in the correct position to terminate or not.
• Pair of looped sockets (the signal may pass from one to the other in either direction): common in broadcast equipment. The input can go to either socket; a termination, or terminated cable, must be placed on the other socket.
• Pair of looped sockets with switched internal termination: the most common and the most versatile. NB... the termination may switch automatically on detection of a second cable being connected to the second socket. Check the labelling carefully!

It is well worth consulting the manufacturers' handbooks of the equipment concerned if in doubt about connections
and terminations.

This section has dealt with analogue and digital in standard definition, as that is the most common system being
broadcast today.
However, there is a new high definition standard in the wings, ready to give the home viewer high picture content
and detail. There are at present a couple of HD analogue formats, which are being replaced by their digital equivalent,
known as HDSDI (high definition serial digital). There are over 15 different formats for HDSDI, but they are all similar in
principle.
HDTV digital component video is produced by applying a 4:2:2 sampling structure to the analogue signal. The
luminance component (Y) is sampled at 74.25 MHz; the colour difference components (U & V) are both sampled at 37.125
MHz.
The Y stream is quantised to 10 bits resolution and Timing Reference Signals (TRS) are added at the beginning
and end of the horizontal video blanking period.
The U & V streams are also quantised to 10 bits and then interleaved to give a C stream at 74.25 MHz. TRS are
added at the beginning and end of the horizontal video blanking period.
The 74.25MHz Y and C streams are then interleaved to produce a single stream at 148.5MHz.
It can therefore be seen that at 10 bits the transmission rate is nearly 1.5Gb/S (1485Mb/S), whereas SDI is only 270Mb/S.
A factor of about 5.5 increase in transmission bandwidth is therefore required, which can potentially cause problems with the cable
infrastructure, terminations and BNC connectors included, that copes easily with SDI.
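The arithmetic behind those figures, as a sketch using the sampling structure described above:

Y_MHZ = 74.25    # luminance sampling rate
C_MHZ = 74.25    # interleaved colour difference stream (2 x 37.125 MHz)
BITS = 10

hd_rate = (Y_MHZ + C_MHZ) * BITS   # megabits per second
print(hd_rate)                     # 1485.0 Mb/S, i.e. nearly 1.5Gb/S
print(hd_rate / 270)               # 5.5, the increase over 270Mb/S SDI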

END OF THE SYSTEM

PART 3 IN PRODUCTION
WORKING WITH CAMERAS

A few basic functions can be quickly checked when the camera is switched on: the viewfinder picture and the
information displayed there; indications of light level, battery condition, possibly tape time, and others. However, it is well
worth the effort to read the handbook to acquire some background to the various functions.
Set up the camera with a suitable picture monitor. A broadcast camera will have a 'colour bars' function. Colour
bars is the standard video test signal, and thirty or so seconds of colour bars at the start of each tape will be expected by
every edit suite producing work for broadcast transmission. The same applies to 'live' inserts. Colour bars will be used to
'line up' the camera's signal, confirm that it complies with the standard and will be accepted by all subsequent destinations
'down the line'. It is convention for colour bars to precede all programme material, and they will be accepted by the destination
as representative of the picture signal parameters to follow.
Select 'colour bars' on the camera and note how they appear on the picture monitor. There is little the picture will
tell you about colour bars, so vivid are the colours to the eye that any attempt to guess at how 'good' the colour bars test
signal is will be wholly unreliable. A WFM is essential, a vectorscope too, if available.
Connect the WFM to the camera output and, remembering the termination requirements, connect up a picture
monitor as well. When using a Hamlet Video Scope 301WVA In-Vision WFM, 75Ω terminations must be connected to the
loop output.
With all connections in place, there should be a display of colour bars waveform and picture similar to those in
Fig 3.1 of 75% colour bars. Also shown are the luminance-only and chrominance-only components of the colour bars. This is a
feature of the WFM enabling alternate display of the full colour signal and luminance only... a useful feature when observing
picture levels.

Fig 3.1: 75% colour bars... the full signal, chrominance only, and luminance only.

Check the level of the white bar is at 100%, or 1.0 volt above sync level. Black is 0.3 volt above sync, and the
colour burst is 0.3Vpp (volts peak-to-peak). Variations between PAL and NTSC cause these values to differ slightly, but
these may be ignored for the moment; Part 4 has the standards detailed. Also check the colour saturation. Observe the
green bar; count from the left... including white, green is the fourth. Check that its subcarrier envelope reaches down to
black level.
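Those checks can be summarised as a simple checklist; a sketch only, with the nominal values as quoted above and a 2% tolerance assumed purely for illustration, not a broadcast specification:

NOMINAL_VOLTS = {
    "white above sync": 1.0,
    "black above sync": 0.3,
    "burst peak-to-peak": 0.3,
}

def check_bars(measured, tolerance=0.02):
    # Compare each measured value against its nominal level.
    for name, nominal in NOMINAL_VOLTS.items():
        ok = abs(measured[name] - nominal) <= tolerance
        print(f"{name}: {measured[name]:.2f}V {'OK' if ok else 'OUTSIDE TOLERANCE'}")

check_bars({"white above sync": 1.00, "black above sync": 0.31, "burst peak-to-peak": 0.26})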
So far only composite has been considered. The monitor is using its composite input, as is the WFM. The recorder
though, will almost certainly use component if it is of broadcast specification.
The recorder may be integral with the camera: a camcorder. It may be separate with dedicated multi-function
cable to the recorder that carries the three-signal component video. The measurements all take place on the composite
signal and rely on the camera PAL or NTSC encoder being accurately set up for the purpose. A guarantee only possible by
good design and periodic maintenance.
To complete the test, record a half minute of colour bars and look at these played back. Some older recorders
may not play back reliably, providing no more than basic picture monitoring. Refer to the manual if in doubt. If this is the case,
prove the recording on a full specification playback machine. The playback signal should replicate the original for level,
black level, and saturation. However, should this not be so, do not assume that the playback machine itself has been
correctly set up. Some degradation will be evident, the signal may vary, but should in the main, be within 5% of the original.
If in doubt consult a qualified engineer regarding both your recording and the playback facilities you are using.
When the camera is switched to colour bars the lens iris may close, or 'cap up'. A mark on the ring denotes
where this point is, usually a 'C'. When switched to picture the reverse should take place. If not it is because the lens has
been switched to 'manual' or 'M'. The iris can be either under the camera's control when working in auto exposure, or
under the operator's direct control in manual.
Switch off colour bars. Select the lens to manual and close the iris. The picture on the monitor will be black. If
not... if it remains grey, the picture monitor is not set up correctly. In the section 'Working with the Picture', how to set up the

picture will be described, but for the moment assume the picture is OK. The waveform monitor should also show black.
Sync and colour burst will be there (this assumes the full waveform is available; some Hamlet products offer picture-only
waveform). Sync and colour burst are terms often shortened to 'colour black' or 'black and burst'. Look carefully at the
waveform. Note there is, or should be, a detectable level change from the video black of the front and back porches to
picture. This is the camera pedestal.

As the emphasis moves from the signal towards the picture so the term 'percentage' comes into use. And note
that the picture begins at black level, not sync level. This therefore places black at 0%. The WFM graticule may show zero
at sync level.
Pedestal should raise picture black by about 3% above video black, although the value may be user set. No
signal, apart from subcarrier chrominance in the composite signal, is permitted to go below video black at 0%. Any attempt
to do so will be removed by the camera's black clippers. Pedestal is a degree of protection against any scene black being
inadvertently clipped. The setting of pedestal is an operational choice, and is adjustable, either through a manual control or
via a menu function. If a menu is provided, the values shown may not relate to signal percentage levels; again, the camera
manual should explain the relationship between the two.
The NTSC standard requires a fixed pedestal, or 'set-up', of 5 to 5½%. Pedestal
in the PAL standard is a user set function, permitting its use photographically rather than as an engineering one. Allowing
the operator to adjust pedestal in this way, however, does introduce a risk of error. Too low a setting and the effect is
excessive contrast in the pictures with the darkest tones pushed down into the black level clippers and lost. If recorded like
this the effect is irredeemable. Raising pedestal softens the darker tones and makes them lighter.
Both conditions are equally acceptable in the right circumstances. Operational pedestal is there to provide just
that, but to get it right demands proper adjustment. The figure of 3% is a very small level to measure accurately. Not all
WFMs have the means to do so... indeed, operating conditions in the field may make the procedure even more difficult.
Suffice for the present to set pedestal arbitrarily. Adjust the control: if it's a rotational one, turn it... but not before noting its
position so that it may be restored. If the control is via a menu, note the reading. As the pedestal alters, watch the picture
and watch the WFM. Moving the control in one direction, the picture blacks will become more grey; in the other, the grey
tones will reduce to black. Note how at 0% the black clippers remove any signal attempting to go below black. This will be
seen particularly well on the WFM. Finally set the control to about 3%. Aim for 20mV (0.02V) in engineering parlance.
Now open the lens and expose the picture.

THE PICTURE
At this point the picture monitor must be checked, for it will be used for the final assessment of the camera.
Make sure that stray light is kept from the screen, shield the screen if necessary. The main picture controls are 'brightness'
and 'contrast', and we will restrict ourselves to these for the moment. A simple test to prove how important it is to get the
adjustment of brightness correct is to raise camera pedestal and then reduce monitor brightness and see how one cancels
the other. This apparent interaction between pedestal and monitor brightness has been the cause of as much confusion as
incorrect termination. A similar problem exists between monitor contrast and camera exposure.
Proper monitor set up is essential. Many modern monitors address the problem by having pre-set positions for the
controls but even here, there is no allowance for changes in viewing conditions. Nor should the possibility be overlooked
that the pre-sets have been altered. Such is the precision required in monitor set up.
Brightness determines that when black is sent, black will be seen. This contradiction of 'seeing black' has to be
dealt with correctly. Capping the camera reduces our picture to camera pedestal, which is picture black. By raising monitor
brightness until this becomes grey, then reducing again until black, we will pass through the point of extinction... the
'blackest' possible from the screen. But the brightness control will go further... just as the camera pedestal control can
pass through black. Setting of brightness at the extinguishing point is not at all easy without some form of aid.
Colour bars offer nothing in this regard, the signal has no elements able to provide even the roughest guide to
brightness setting. What is required is a method that takes into account the individuality of picture monitors and, most
important, the viewing conditions.
In the absence of any picture set up aid, setting monitor brightness will be hit and miss. At the least, it will take
considerable trial and error before real confidence can be gained in the picture.
Raise monitor brightness until the screen is seen to become slightly brighter. Remove extraneous light
from the screen to assist. What the monitor shows is camera pedestal. After careful study imperfections may become
evident. Shading... one part of the screen may be brighter than the other, or a slightly different colour. The effect may be
due to the monitor itself...its own shortcomings, in fact. The only way to be sure is to lower camera pedestal into the black
clippers, and leave true video black. Practise and experiment, study the WFM and picture. If all this seems rather
wearisome, there is a reason. It is all part of the learning process. Learning what the picture shows and what the WFM
says. Learning the difference between picture brightness and camera pedestal.
Now set the monitor brightness to almost extinction... but not quite.
Open the camera lens.
Point the camera at a suitable camera line up chart, or a Kodak Gray Card. A greyscale is well worth investing in,
but there are unfortunately differences from one to another regarding reflectance values and colour. Not all greyscales are
grey! In reality the differences may be small and of little consequence, but where more than one camera is involved and
standards of matching are higher, greyscale quality should be considered carefully. On the other hand, the simple grey
card made available world-wide by Kodak is consistent. It is based on a recognised photographic standard of 18%

reflectance, with a reverse side of approximately 90% reflectance white. Although only constituting two steps, the Kodak
cards provide a known value of reflected light input to the camera. The following procedure uses grey cards.
Light the cards as evenly as possible with a small light on a stand. The lamp power need not exceed 1kW. Be
sure to keep the light off the monitor screen. Open the lens iris until the picture appears on the monitor. Study the WFM and
recognise the various picture elements. The white card will be the highest level of the waveform, adjust the iris to bring this
to 100%, or 0.7V above black. The grey card level should lie between 40% and 50%. Quite likely the measured level across
the card will vary; this will be due to an uneven spread of light across the card. Even lighting is not as easy to get perfect
as it looks, but aim to achieve a reasonably smooth coverage. The WFM will reveal any discrepancy much more readily than
the picture.
A camera designed to broadcast standard will place the grey card at about 50% when the white card is at 100%,
for a gamma of 0.4. Not all cameras will give a reading of 50%, some may read lower, say, 40 or 45%, indicating a higher
gamma. For more information about gamma see the section on Standards.
Increase the exposure until the white card is seen to limit on the waveform into a thin straight line at 100%, maybe
more. This is the camera white clipping coming into operation, a figure that will vary camera to camera unless set up to
broadcast standard of 100%. Many cameras exceed 100% in the desire to gain greater contrast range. If a camera
measures lower than 100%, its peak white clippers will need readjusting to the correct level.
The recorder back (if a camcorder) will be designed to accept levels higher than 100%. The standard, however,
is 100% and any excess will be removed before sending through the transmission chain. Allowing the camera highlights to
exceed 100% to gain the finest cloud-scape may be very laudable, but before entering the broadcast chain the white limit
will be imposed. White clipping is severe, for it does not reduce the signal overall, but removes everything above 100%.
This is drastic and will reduce delicate highlights to a bland white at 100%. It is for the operator to decide whether this is
acceptable or not before shooting.
The use of cameras with higher than 100% maximum output is now common. If the camera on test does so there
are two options. Set the camera to limit at 100%, thus giving one the opportunity to see the effect and reduce the camera
exposure. Alternatively, leave until the edit process and reduce the signal level on playback of the tape.
Camera clipper adjustment needs care. There are three separate adjustments; one for each colour, and if they do
not match, the clip will not be true white. Adjusting one alters the overall luminance value and getting all three to make 100%
takes patience.

COLOUR BALANCE
Readjust for correct exposure with the white card at 100% (not clipped). On the waveform, the white and grey
cards should show no evidence of subcarrier, which will appear as a thickening of the trace.
White balance is when the camera reproduces grey and white as truly neutral, i.e. no colour. Many cameras
have two in-built white balance points: one for daylight at a colour temperature of 5500K to 6000K, and another for
tungsten light at around 3000K. A third allows the balance to be carried out by the user. The camera measures RGB and
adjusts them until they are equal. In doing so it assumes the light entering is representative of scene grey. Auto white
balance may be available as a continuously updating option, as the camera moves. Although the results may be acceptable,
colour balance changes are sometimes abrupt and distracting. Automatic artifices have their place but the experienced
operator will often prefer to take manual control.

Manual white balance will require a grey, or white, to occupy most of the centre of the frame. Expose between
50% and 100%, avoid over exposure. Operate the white balance control and note the result on picture and WFM. Only a
vectorscope will show how the white balance pulls the grey to the centre of the display. Switch from tungsten to daylight
balance and note the WFM and vectorscope.
Colour casts occur because of gain differences in the colour channels. Colour black errors are due to pedestal
differences. Many modern cameras carry out both black balance and white balance, but this will not eliminate errors that
occur between the two. The eye is particularly sensitive to facial colouring too subtle for the WFM or vectorscope to
reveal. When black and white levels are balanced, any remaining grey errors are often attributed to gamma differences in
the colour channels. A percent or so of gain or level difference in colour channel matching will give a noticeable colour
shift.
The newcomer may not fully appreciate the value of all these checks and observations. Consider it part of the
learning process... the desire to understand fully. The eye may be very forgiving in some ways, but it is also extremely
discerning in others. The goal is to create pictures that tell the story in the way intended, with no artefact or flaw allowed
to interfere with that. The picture is all important. Waveform, like any measurement device, is a means to that
end.

SENSITIVITY AND EXPOSURE
Light levels and exposure are specified in industry-standard units used throughout television. Typical examples of light
level are: street lighting, 10 lux; an average reading light delivers about 500 lux at one metre; TV studio light levels would be around 80 foot-
candles, say 800 lux. Modern cameras can work down to 10 lux and lower, but such light levels are not practical for
normal operations.

The standards for light measurement are:


1. Light power is measured in candelas, or candle power
2. The light from a standard candle illuminating a surface area is measured in lumens
3. The illumination of one candle power at one foot is one foot-candle, or one lumen/ sq. foot and is the standard
Imperial unit of illumination. The metric equivalent is the lumen/ sq. metre, or lux
4. 1 foot candle = 1 lumen/ sq. foot = 10.76 lux.
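The conversion in point 4 as a sketch (the studio figure above... 80 foot-candles, say 800 lux... is the usual round approximation):

LUX_PER_FOOT_CANDLE = 10.76

def fc_to_lux(fc):
    return fc * LUX_PER_FOOT_CANDLE

def lux_to_fc(lux):
    return lux / LUX_PER_FOOT_CANDLE

print(fc_to_lux(80))     # ~861 lux for 80 foot-candles
print(lux_to_fc(500))    # ~46 foot-candles for an average reading light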

Light meters (or exposure meters) are usually calibrated in foot candles or lux, sometimes both. Light meters used
for film may be calibrated in exposure, f-number, and ISO film rating.
It is possible to rate a video camera as if it were film; allocate an ISO rating. However, video is a medium of
variables; the camera has adjustable sensitivity and gamma. The section on Production will give the background to this, but
if it is desired to rate the camera as for film, follow this procedure (a worked sketch follows the list):

1. Set up the camera to view a grey card adequately illuminated from a constant light source.
2. Expose the camera to bring the card to 50% using a WFM or Vical. Ensure pedestal is about 3%
3. Read the f-stop on the lens.
4. Read the light level with an ISO calibrated exposure meter. Use either reflected or incident light.
5. Select the camera shutter speed to 1/50 second.
6. On the meter, set the exposure to 1/50 second (shutter speed) and set the f-number to that of the lens iris.
Adjust the ISO scale to the measured light reading. Read off the equivalent ISO rating for the camera.
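Step 6 can also be done as arithmetic, using the standard incident light meter equation E = C × N² / (t × S). A sketch only: the calibration constant C of about 250 lux-seconds is a typical meter value assumed here, and the f-number and light level are illustrative figures, not measurements:

C = 250.0   # incident meter calibration constant (assumed typical value)

def iso_rating(f_number, shutter_s, lux):
    # Equivalent ISO rating S from E = C * N^2 / (t * S).
    return C * f_number ** 2 / (shutter_s * lux)

print(round(iso_rating(f_number=5.6, shutter_s=1/50, lux=800)))   # ~490 ISO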

The use of a separate light meter or the use of the camera's auto exposure, or indication, is a matter of choice.
Modern cameras are perfectly capable of computing exposure, but it is for the user to decide if the result is suitable for the
scene being photographed.

LIGHT CONTRAST RANGE


The dynamic, or contrast, range of a video camera is about 5 f-stops, where 1 f-stop is halving or doubling the
light value either falling on the scene or entering the camera. The lens iris is calibrated in f-stops. The following table lists
typical light/ signal output levels for a camera with a gamma of 0.4.

LIGHT INPUT (relative, f-stops) SIGNAL OUTPUT


100% (0 stops) 100% (peak white)
50% (−1 stop) 75%
25% (−2 stops) 55%
12.5% (−3 stops) 40%
6.25% (−4 stops) 25%
3.125% (−5 stops) 3% (pedestal level)
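The mid-tone rows of the table follow roughly from the power law signal = light^gamma; the lower rows fall progressively below it as pedestal and the camera's black-end processing take over, which is why the −5 stop row sits at pedestal rather than the 25% a pure power law would give. A sketch:

GAMMA = 0.4

def signal_percent(light_fraction):
    # Transfer characteristic for a gamma of 0.4 (mid-tones only).
    return 100 * light_fraction ** GAMMA

for stops in range(0, 5):
    light = 0.5 ** stops
    print(f"{-stops:>2d} stops: light {light * 100:6.2f}% -> signal {signal_percent(light):5.1f}%")

# The 18% grey card against 90% reflectance white is a light fraction of 0.2:
print(signal_percent(0.18 / 0.90))   # ~52.5%, the 'about 50%' grey card level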

Sensitivity is an indicator as to how well a camera performs in low light. The correct exposure for one camera
may not be the same as for another. Sensitivity is usually quoted as an f-number for a given light value. The lens iris, or
aperture, is calibrated in f-stops. Changing the iris by one stop alters the light transmission by a factor of 2. To compare
camera sensitivity:
1. Expose a white card to 100%.
2. Measure the light falling on the card (or reflected from it).
3. Note the iris ring setting (the f- number).

Sensitivity may be quoted in different ways using various light levels and measured signal levels. The parameters
must be checked before comparisons are made.
Sensitivity is adjustable on the camera. Usually called 'gain', the calibration is in decibels, a hangover from
the electronic philosophy of camera design. It may also be expressed as a factor.

Decibels (dB) -6 0 6 9 12 18
GAIN (factor) ×0.5 ×1.0 ×2.0 ×3.0 ×4.0 ×8.0
SENSITIVITY (f-stops) -1 0 +1 +1½ +2 +3
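The table rows are related by factor = 10^(dB/20) and stops = log2(factor); a sketch (the ×3.0 in the table is the usual rounding of the exact ×2.82 at 9dB):

import math

def gain_factor(db):
    return 10 ** (db / 20)        # voltage gain factor

def gain_stops(db):
    return math.log2(gain_factor(db))

for db in (-6, 0, 6, 9, 12, 18):
    print(f"{db:+3d}dB -> x{gain_factor(db):4.2f} ({gain_stops(db):+.1f} stops)")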

EXTENDING THE CONTRAST RANGE
Various forms of contrast control are available to alleviate the problem of excessive scene contrast. The principle
is to reduce the rate of signal increase as it approaches 100%, causing compression of the higher tones. Peak white
clipping is still present, but its effect will be less severe and therefore less obvious. Fig 3.3 shows the principle.

Fig 3.3: signal output against light input in f-stops; above 85% the slope of the curve is reduced, gaining about a quarter of a stop before the 100% clip level.

Contrast control reduces the rate of signal increase above 85% by reducing camera sensitivity above this point.
About a quarter of a stop is gained. Dynamic contrast control (DCR) automatically adjusts the parameter to follow changes
in overall scene brightness.

CAMERA SHUTTER
Modern CCD cameras have variable shutters. They may be mechanical or electronic, and stem from the need to
shutter the light from the sensor during the read-out time. Shutters are now adopted as an operational feature to control
exposure, or to reduce motion blur in stop-frame applications. The camera shutter sets the exposure time. The original tube
cameras exposed at 1/25 second per frame but for CCD's this is now more usually 1/50 or 1/60 second. The exposure at
other shutter timings are:

Shutter exposure, secs 1/50 1/125 1/250 1/500 1/1000


Relative exposure, f-stops 0 −1 −2 −3 −4

The 1/60 sec shutter is so similar to 1/50 as to make little practical difference to exposure.

ZEBRA
The camera's exposure indicator is zebra. Its name arises from the diagonal moving pattern that is seen in the
viewfinder. Zebra appears at points in the picture where the level exceeds a predetermined value. The onset level is
adjustable. To measure it, adjust the exposure of the grey card until the pattern appears, and read the level off the WFM.
A good guide figure for zebra is 90%, so that warning is given of levels approaching 100%. A lower setting will
mean excessive interference with the viewfinder picture, although users may choose their preferred setting for onset of
zebra.

GAMMA
Gamma is a pre-distortion introduced to offset the inherent distortion of the viewing cathode ray tube. It has
become a useful variable for those wishing to alter the tonal balance of the camera. Film has, over the years, made great
use of altering gamma for its pictorial benefits, but the electronic camera has the advantage that gamma may be changed
from shot to shot. Film gamma can only be adjusted chemically and is restricted to the whole reel (unless there is the option
to do an electronic alteration after processing). It has already been stated that the gamma of a broadcast spec camera is
0.4. The section on Standards sets out the background to gamma.
Most broadcast cameras make adjustment of gamma an option. To increase the density of darker tones, raise
gamma to 0.5. Expose the white card to 100%, set pedestal to about 3%. Measure where the grey card is on the

waveform. Adjust overall gamma to bring the grey card to 40%. Gamma of 0.5 has a 'low key' look; heavy shadows and
extended higher tones. Working with a high gamma produces those strong dramatic shots of the 'film noir' style. Colour
saturation will also increase in darker tones, and the effect this has on skin tones must be borne in mind. Working with
gamma lower than 0.4 produces a 'high key' look; pale shadows and milky high tones.
Lighting balance and control will be affected by altering gamma, an indicator of how lighting and camera are
interrelated in the total photographic process.
A menu driven camera is ideal for these procedures, if for no other reason than to restore to the starting
condition. This takes the worry out of changing settings. The final choice is the user's; whatever suits the shooting
requirements.

COLORIMETRY
This parameter has found more interest since the introduction of digital control in cameras. Colorimetry controls
the mix of colour. The parameter is factory set to produce the correct hue and saturation of RGB. Until menu control was
introduced, adjustment was prohibited because of measuring difficulties and the risk of getting it wrong. To be able to
press 'reset' and restore the original values makes experiment possible. Where this option is offered, adjusting colorimetry
is a matter of experiment. If possible make up one's own settings and store them in the camera's memory. Never attempt to
alter the default setting.

DETAIL
The sharpness of the camera is a user setting known as detail. It is not resolution, which is fixed by the number
of pixels on the CCD sensor. Detail modifies black/white transitions to enhance apparent sharpness. The
requirement arises from the small viewing screen, particularly of early TV receivers. Modern picture displays have
improved markedly, but detail enhancement has also been greatly improved from the less than subtle thick black edges that
were once so common around eyes and lips. By doing so, the designers have increased the complexity of detail
considerably.
Setting the detail parameters requires a good picture monitor, a little care, and patience. Refer to the camera
manual for specific instructions and watch out for the terms used. Detail is now accepted as a standard term by most
manufacturers but within the designation there are subsidiary adjustments with a range of terms applied to them. Onset of
detail is level dependent, for instance, it is not needed at, or near black, so the onset of detail is set to commence about
10% above black. But this is user determined, as is the amount of enhancement. More sophisticated is the automatic
adjustment that permits enhancing the eyes but avoids other features such as wrinkles and skin flaws.

A good monitor is essential. Some monitors and receivers add their own sharpening to the picture, a point to be
conscious of when adjusting that of the camera. Nor is this a task that can be assigned to the WFM for detail is very much
a picture parameter. Having said that, some camera manufacturers have laid out test card procedures for setting detail and
measuring the result. The final assessment, however, must always be confined to the picture. Experiment, but be prepared
to readjust for not all scenes benefit from the same settings.

A FEW HINTS
Generally speaking if a control is offered as a user set facility it is quite in order to try it out and see what
happens. But... always follow these simple rules:

• Understand what the control does before you disturb it.
• Make sure you are able to restore it to its original position.
• A setting suitable for one scene or shooting situation may not apply to another.

And note... there are many controls inside cameras, particularly older ones, that are engineering set-ups. Do not
alter these. The operational handbook will reveal only those that are user adjustable. The workshop manual will reveal the
others. The most significant elements of camera set up have now been looked at.
Experiment, be patient, and above all, observe. Study your pictures and those of others. Learn about video
measurement and what it means to the pictures being created. Be methodical. Avoid being over-zealous. Not to do these
things is to condemn oneself to the perpetual darkness of never understanding.

THE DIGITAL CAMERA


One may be forgiven for thinking that the all-digital camera had been overlooked in our discussion. Not so. What
has gone before still applies; all the parameters described above are still available. Operationally, the digital camera is no
more of a photographic device than its predecessor. It is more versatile and has more features, but it is still essentially a
camera. This is not to belittle these remarkable advances in technology in any way at all. Digital processing offers huge
benefits, principal among them the protection against degradation, whether in the image processing or recording. Great
strides have been made in picture quality, in setting fine and subtle picture parameters, and in holding such parameters in
memory for recall at any time.
What the digital camera cannot offer is to replace the photographer.

SIGNAL LEVELS
The camera, as described earlier, may be able to handle an extended scene contrast range that reproduces as
an extended video signal. Levels as high as 125% are not unknown. Where shooting conditions are not fully controllable,
out of doors with bright clouds in blue skies, extended contrast is an advantage. Modern CCD image sensors have a wide
contrast acceptance and the signal processing is designed accordingly. Where such programme material remains local and
unlikely to pass through international circuits, signal levels higher than 100% may be acceptable.
Broadcast organisations place a limit of 100% on all material, the final stage of the studio system ensures this has
been complied with by clipping the signal back to 100%.
Colour bars, which precede programme material, live or on tape, are a signal level reference. If pictures are
allowed to exceed 100%, colour bars can no longer be accepted as representative of picture levels. The destination, on
accepting incoming colour bars as correct, will impose clipping at 100% before sending on. It is up to the programme
originator to be prepared to work within the limits set down by the video standard.
Measuring colour bars that precede all incoming and outgoing material, for level and saturation, is part of normal
programme operations.

MORE THAN ONE CAMERA... STUDIO TIMING


Earlier, an experiment was described using two cameras feeding the inputs of a WFM. The idea was to illustrate
how the internal timebase clocks run at quite different rates, producing an asynchronous result. The arrangement shown in
Fig 3.4 utilises genlock. The output of camera 2 feeds the WFM and, via a 'T' piece, camera 1's genlock input. Note that both
cameras are terminated at the WFM inputs. The effect is for camera 2 to lock, or slave, camera 1.

Fig 3.4: camera 1 feeds WFM input 1; camera 2 feeds WFM input 2 and, via a 'T' piece, camera 1's genlock input.

Both waveforms will be stationary although not synchronous; locking the two clocks does not mean they are
running in sync. Fig 3.5 shows how the waveforms may appear. In Fig 3.5(a), camera 2, the master timebase, has had its
waveform aligned so the leading edge of the synchronising pulse... that's the left hand edge of line sync... sits on a
convenient marker point. Fig 3.5(b) shows how camera 1 is in advance: it arrives earlier because it has a
shorter cable, therefore its line sync appears earlier, i.e. to the left of the WFM display. So, although the cameras run at the
same speed there is a timing error.

Fig 3.5: (a) camera 2, the master, aligned to the marker point; (b) camera 1 arriving early.

The check out of the single camera made no mention of time. Timing is a relative parameter and, for an isolated
picture source, is irrelevant. A studio with more than one source has a timing requirement: the pictures must arrive in
synchronism at the mixing or switching point, otherwise a confused sync pulse train will result. A source, whether
camera or graphic, may be routed through diverging signal paths, through various video effects, but on arrival at the mixing
point must be in synchronism with all other sources. Those with longer paths must start earlier.

Fig 3.6: four sources, a graphics suite and cap gen among them, feed the studio mixer; a test signal generator supplies genlock to all sources and an external reference to the Hamlet WFM, which, with the picture monitor, takes the mixer's monitoring output alongside the main output.

The studio develops from the simple experiment in Fig 3.4 to that shown in Fig 3.6. Four sources are connected
to a studio production mixer, or switcher. It is quite feasible to genlock the sources from a chosen master, a choice made
on grounds of convenience only: say, feed a local camera signal to all the other sources, even to other areas to genlock a
remote system, such as the graphics system above. Yet when one considers the growing complexity that inevitably
follows multi-source design, test signals become a prerequisite. And as most such devices offer a multitude of output
signals, black and burst (sync and colour burst) among them, it is convenient to use this as the genlock source. Each
picture source therefore receives a genlock feed of black and burst.
The mixer has two identical outputs, one main, the other a monitoring feed to the test set and picture monitor.
Switching through the sources, timing differences will appear on the WFM, also locked by black and burst applied to its
reference sync input. All timebases will be locked together, but will not necessarily be synchronous.
For pictures to be mixed and inlaid together synchronism is essential, otherwise the various picture elements will
not register and an unbroken sync pulse sequence will not be maintained.
To achieve correct synchronisation, Hamlet provides a feature on all units with external reference called Hands
Free Timing, or 'HFT'. This provides an easy way of ensuring each source input can be checked against the reference
signal being used, without the need to continuously switch manually between internal and external reference on the waveform
unit.
In describing timing thus far, we have made no mention of colour, and it is now opportune to distinguish between
component and composite operations.

COMPONENT COLOUR TIMING


Component video demands accurate matching of the three circuits, R – Y, B – Y, and Y. Colour registration, that
is the overlaying of the colour signals, must be done accurately if colour fringing is to be avoided. Registration of the colour
and luminance signals is less demanding. If care is taken over design and installation, these requirements should not cause
difficulties and once in place, measurement on any regular basis should not be necessary. Component working requires
timing to what is called monochrome timing, and is based on an accuracy of 0.1µS. The term 'monochrome' correctly
describes 'one-colour', but in video the term means 'no colour', or 'black and white'.
For component operation the genlock supply of black and burst may still be used. The colour burst however, of
the subcarrier element of composite operation will be ignored.

To ensure the component timing is correct a special test signal is available called Bow Tie. When this trace is
studied through a system and displayed on the Bow Tie waveform mode of Hamlet units it is very clear if there is any timing
error or not. The Bow Tie display shows the individual component position with respect to the luminance part of the signal.
Fig 3.5(x) shows the picture of the signal used; Fig 3.5(y) and (z) show the waveform Bow Tie traces of the Y marker and
Gr (red component), and of the Y and Gb (blue component), respectively, of a timed signal. Notice how the marker coincides
with the null in the component trace. If either component were early or late, the null would be seen to the left or right of the
marker.

COMPOSITE COLOUR TIMING


The system set out in Fig 3.6 is a basic one but the principle may equally apply to composite video as well as
component. By far the most stringent timing requirement is for composite video. The complete colour spectrum is contained
in one rotation of the subcarrier vector, one metre of cable causes about 5nS of delay, or about 8° of vector phase shift.
These are rationalised figures based on the subcarrier frequencies of both TV standards. Unfortunately, cable
characteristics vary with temperature, age, and wear and tear. These effects on a 50 metre length would be significant
over, say, a year's operation, and the system design must take appropriate measures to deal with this. Composite systems
with long cable lengths, measured in hundreds of metres, require some form of automatic check and adjustment of
subcarrier phase delay to compensate for temperature changes.
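The degrees-per-metre figure can be reconstructed from first principles; a sketch assuming a typical coaxial velocity factor of about 0.66, which is where the roughly 5nS per metre comes from:

C_LIGHT = 3.0e8          # metres per second
VELOCITY_FACTOR = 0.66   # assumed typical for video coaxial cable

def phase_shift_deg(length_m, subcarrier_hz):
    # Cable delay expressed as degrees of subcarrier phase.
    delay_s = length_m / (VELOCITY_FACTOR * C_LIGHT)
    return delay_s * subcarrier_hz * 360

print(phase_shift_deg(1, 4433618.75))   # ~8.1 degrees per metre, PAL
print(phase_shift_deg(1, 3579545.0))    # ~6.5 degrees per metre, NTSC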
Timing now becomes exceptionally complex. The principle of locking all sources, or genlock, in its simplest form
must be expanded to include the phase relationship of the colour subcarrier. Full use will now be made of the black and
burst genlock, with line timing secured by the line sync and colour phase by the colour burst.

TIMING PROCEDURE
Picture cutting, inlaying and chroma keying are handled by the studio mixer (or vision mixer, production switcher,
picture editor, video switcher... there is an endless variety of names to describe this piece of apparatus). The process
requires timing accuracy to ensure the pictures arrive in synchronism, all beginning at the same point in their scans and all
ending together. Pictures are keyed or inlaid, mixed and faded to create the final image, with more than one video source
contributing to the final picture.
Consider component first as the most straightforward. The design will require the matching of cable lengths and
characteristics for the three component signals. When this is satisfactory timing is relatively easy.
Genlock will ensure that each source is locked and each source will have a control, usually conveniently
placed... yet not so convenient that it invites casual adjustment, and labelled 'H timing'. Other designations may apply, such
as 'H phase', 'line timing', or 'horiz', or simply 'H'. Adjust this control until the line sync of that source, as measured on the
WFM, is aligned in time with all other sources. Make things simpler... choose one source as master and align the others to it.
The job will probably require two people; one to read the WFM, the other to make the adjustments. Hamlet have HFT...
hands-free timing, to make this a one-man job. Note that the WFM 'looks' at mixer output, and it may require another
individual to operate the mixer if it is remote and there are a number of sources to set up.
Whilst carrying out this procedure, take the opportunity to check the signal; look at colour bars for level, see that
line sync is about 0.3V, check all the sources are similar. Variations will exist, experience will dictate what action, if any, is
required. Sync pulses may appear to be clean with good vertical edges. Magnifying them in a time sense shows this is not
so. They have a rise time. Their shape will alter depending on where they come from, and how far they have travelled.
Time the sources with the WFM set to magnify. Use the same point on the graticule, and take a point on the
leading edge... the left hand edge, and half way down the sync pulse, as in the waveform of Fig 3.7.

Fig 3.7

Composite operation uses line, or H, timing in an identical fashion to component, adding the colour phase element
to complete. Control of line timing and colour phase are independent of each other, a further indicator that even in
composite video, luminance and chrominance still remain discrete signals. The single cable connections have more stringent

cabling requirements. With cable length and characteristic so critical, a check of synchronism may become a regular
routine, particularly in mobile installations.

After completing the line timing as above, using the vectorscope, look at the colour burst from each source as it is
selected on the mixer. Again choose a master source and compare to that. The base setting of the master may be
arbitrary... synchronism is not absolute. The burst may be set on the vectorscope graticule by the instrument's own 'trace
rotation' or 'phase'. Each composite source will have, in addition to H timing, another control called variously 'C phase',
'colour phase', 'φ phase', 'CSC phase', or 'SC phase'. Using the same technique as for line timing, adjust all source colour
phase controls to align all colour bursts to the same position on the vectorscope, as shown in Fig 3.7b.
Repeat the procedure as a check, both H and C. Levels too... don't miss the opportunity to confirm that everything
is satisfactory.
The general principle of video mixing is that where only one source is selected, say, a single camera, or a
graphic, the whole of that signal will be passed, including the sync (and colour burst where applicable). Where a picture is
produced from more than one source, the mixer must decide which sync and burst to use, how it does so will depend on
its design. It could insert its own black... the generic term to indicate all synchronising information but without picture. Black
is a production requirement, as in a fade-to-black, or cut-to-black. In such an instance, the mixer output will be no more than
sync (with burst if working in composite). A source of black and burst may be derived from the genlock distribution, or, in
some cases the mixer may generate black.
From this, we can see how the mixer itself has become a video source. The black it produces must also comply
with the timing requirement. In some mixers, black, because it is indigenous to the design, is declared the master timing source.
One further source of picture material may be similarly mixer generated: caption 'infil'. Cut-outs in the picture, such as titles or captions,
may be infilled with a colour, or matte; 'infil' is sometimes known as 'matte' (with or without the 'e'). The mixer as
a source in its own right, with its own black and matte generators, makes this a central feature of the video production
system. Individual designs deal with this in different ways and consultation of the handbooks is recommended. Where
timing measurement is concerned, such features may affect the procedure. As internal mixer sources, black and infil may
not have timing adjustment; cameras and other sources must be aligned to these.

THE SYNCHRONISER
At this point it is worth introducing one of the first digital units to arrive in television. The synchroniser is quite a
simple device; a video input and output, a genlock input, and adjustment of H timing and C phase. Usually video gain, or
level, and black level adjustment as well. But under this cloak of outward simplicity is a full frame's worth of video memory
and a processor that allows the video signal to be clocked out on another timebase at the initiation of genlock. In short, the
synchroniser takes an incoming source, on a remote timebase and transforms it to a local sync timebase. The output
appears the same as the input for the device is transparent, apart from the timing change. However, doing all this takes
time: the output is one frame behind the input. In other words, the signal is delayed by one frame.
Such a delay is, in itself, of no importance; the machine begins its picture output that much earlier. But one must
consider the other part of the video tape output: audio. The audio is not subject to a timing delay and will therefore be
advanced over the video. This may not be a great deal in audio terms, or lip sync, but where multiple synchronisers are
used in a complex programme chain, the audio will require delaying by the same amount.
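To put numbers on this, a minimal Python sketch follows, using the frame periods implied by the nominal rates already quoted (25 frames per second for PAL, 30/1.001 for NTSC); the function name is illustrative, not a real API.

# Audio delay needed to restore lip sync when frame synchronisers
# delay the video by one frame each.
FRAME_PERIOD_MS = {
    "PAL": 1000.0 / 25.0,            # 40 ms per frame
    "NTSC": 1000.0 * 1001 / 30000,   # about 33.37 ms per frame
}

def audio_delay_ms(standard, synchronisers):
    """Audio delay (ms) matching video delayed one frame per synchroniser."""
    return FRAME_PERIOD_MS[standard] * synchronisers

print(audio_delay_ms("PAL", 1))    # 40.0
print(audio_delay_ms("NTSC", 3))   # about 100.1

For a single synchroniser in a PAL chain, then, the audio needs a 40 ms delay; several synchronisers in series multiply that figure.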

Fig 3.8 (camera cables run to CCUs in the technical area; the CCU sources, capgen and tape feed the studio mixer; remote video arrives via the synchroniser; the SPG supplies genlock to all sources; WFM and picture monitor check the studio output to tape)

Studio design has progressed to that shown in Fig 3.8. Control units have been introduced. The camera CCUs,
capgen rack, video tape, and synchroniser are controlled remotely from the production area, while the engineering resides
in the technical area. This aids test and measurement considerably. Timing is now confined to control units, probably a one-
person operation to check out and set up. Composite operation is simplified because the cabling is shorter... a few metres
from control unit to mixer. Also, component benefits because the total cabling cost falls sharply.
Control units detach the actual picture sources from the mixer. Timing and level checks are confined to the control
unit, any adjustment of the picture source is carried out from the control unit, and the whole process is simplified. An
important point when one considers that studio systems use a multiplicity of paths for various effects, but with the
hardware all 'under one roof' the design is much tidier.
Test signals are still a feature of the new design but may be incorporated into another box. Fig 3.8 shows an
SPG, or sync pulse generator. This device generates test signals, black and burst as before, and also colour subcarrier if
that is required. The standard of accuracy will be higher than the earlier test signal generator. The SPG may be one of two
master sync sources, one backing up the other: should syncs fail in a sophisticated system like this, the whole
production centre comes to a standstill.
The discussion about timing has overshadowed all else. That only reflects its importance, for nothing will work
unless it is executed properly. There has been only cursory mention of pictures. When timing the signals, it is only the
'below black bits' that are of interest. Whether pictures are present or not really doesn't matter. The camera may be
capped, the capgen left with a few words typed in to identify it. No remote video is necessary through the synchroniser.
The tape machine, as will be seen shortly, may require a tape to be playing. In dealing with that part of video outside the
picture period, pictures are irrelevant. One of the very few times that that is the case.

Having said all that, a good engineer will not miss the opportunity to check, or at least take a quick look, at the full
signal. Put up colour bars, uncap cameras... where possible and convenient, put up a picture. Maybe no more than a
confidence boost, but the business is about making pictures.
There are still further improvements to be made to our already comprehensive system. One is a signal interface, such as
a router or switcher. These are not mixers; they do not perform the complex manipulative operations of picture creation,
although the design philosophy of the two is beginning to cross over.
A very basic but equally flexible system uses none of this. The jackfield is row upon row of sockets, sources
appearing in one row and destinations in another, with links across to connect. These are mechanical cross points where
interconnections can take place and the system re-configured to suit any style of operation. It allows more cameras to be
brought in, or more tape machines, complex graphics and caption equipment. A studio may be changed round in an hour or
so from a games show to a conference to a drama set.
From the timing point of view, the jackfield becomes a timed interface point, similar to a mixer. Cables are
therefore very carefully cut to length to ensure signal delays are equal. The increase in cost and complexity must be
weighed against the benefits of increased operational flexibility. Many modern installations use electronic switchers in
place of jackfields, switching audio and communications as well as video. A digital switcher may roll up everything into one
complete signal as in a Serial Digital Interface, or SDI. Timing remains of paramount significance whatever the method.

TIMING A REMOTE SOURCE


Installation of a remote camera or other video source can be dealt with in two ways. A distant camera may
operate a kilometre away on a standard camera cable fed from its control unit in a mobile truck. Timing would be the same
as for any other local source; at the control unit. A camera's CCU is the interface between the camera and the video
standard. The alternative to very long camera cables is placing the CCU near to the camera and using standard video
cable, a radio link, fibre, or whatever method applies to the particular situation.
In Fig 3.9 the remote source has to arrive at the production switcher correctly timed for a clock graphic to be
inserted. Should genlock be the chosen way to make the camera synchronous, the total distance must be considered. Two
cables will be required; one out and one back. The genlock signal will be delayed getting out to the camera, there will be the
same delay for the camera signal to get back. The camera must start its scan early to overcome both delays.
Genlock is more realistic over shorter distances. It requires an additional cable, and there may be limitations on
how much it can advance the camera's timebase. It must also be borne in mind that both signals will suffer degradation over
longer distances. The synchroniser is a far better answer to distant remotes than genlock.
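To gauge the advance involved, here is a minimal sketch assuming a cable velocity factor of 0.66 (a typical coaxial figure, assumed here rather than taken from this handbook) and the 64 µs line period of 625-line systems.

# Round-trip delay a remote camera must overcome: genlock out
# plus video back over the same cable run.
C = 299_792_458.0          # speed of light, m/s
VELOCITY_FACTOR = 0.66     # assumed propagation factor of the cable
LINE_PERIOD_US = 64.0      # 625-line system

def genlock_advance_us(cable_length_m):
    one_way_us = cable_length_m / (VELOCITY_FACTOR * C) * 1e6
    return 2.0 * one_way_us   # out and back

adv = genlock_advance_us(1000.0)   # camera 1 km away
print(round(adv, 2), "us,", round(adv / LINE_PERIOD_US, 3), "lines")

A kilometre of cable costs only about 10 µs, around a sixth of a line, so the advance itself is modest; the practical limit at distance is the degradation of both signals.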

INLAYING THE GRAPHIC


Fig 3.9 shows the time-line with the remote camera advanced, and the graphic and switcher on the line. The
graphic source has two outputs; clock and key. The clock video is full frame but only the clock face will be used. The key is
similarly full frame, black with a white circle mark where the clock will be inserted. It is this signal that cuts the hole in the
camera video to insert the clock by throwing the electronic switch at the right point on each line.
A key signal has neither sync nor colour burst. Its synchronism to the graphic picture is essential, cable
lengths are critical, and it may be necessary to install separate video delay units in either graphic or key to achieve exact
register of the two at the switcher output.
Key signals sometimes pose a problem for test and measurement. The presence of keys on a router may not be
provided for, and WFM monitoring will be difficult unless an external trigger (WFM sync) is available. Picture monitoring
does have a use with key signals: it is by far the quickest way to ascertain that the key is the correct one for an
application, quite valuable when working in a multi-key environment. But this will require that the picture monitor has an
external sync input. A point worth consideration if keying from multiple sources is to be part of the operation.

Fig 3.9 (time-line: against the timing reference, the remote camera starts its picture early; the graphic generator supplies the graphic and the key, or switching, signal; the effects switcher combines them into the final image on the reference timing)

Timing is essential to picture effects. Where this takes place in PAL or NTSC, colour phase must be accurately
aligned from both sources if the colour integrity is to be maintained. The sync and colour burst of the foreground source
will normally be retained, the graphic sync and burst being stripped off.
This latter feature poses a problem in timing the system. Because sync and burst are from one source, the timing
must be checked at the switch unit inputs, or by observing the output picture. Here, a quality monitor is particularly useful. It will
show source timing errors at the edges of the picture, and it will show colour phase errors in the graphic. Phase errors will
be evident from the difference in colour in NTSC, or saturation in PAL. WFM and vectorscope will indicate by how much.
This is an example of how useful picture monitoring is to timing checks.
Although some mixers may pass individual source syncs in simple cut mode, this is not the case for complex
inlaying or compositing. In such operations sources may be switched at any point in the active field or line, and it would
be unwise to change over black and burst at the same time. During a fade, or mix, with two sources superimposed, it is
unacceptable to allow both sync and burst to add together and result in high amplitude.

The mixer may use sync and burst from an external source, or reinsert from its own colour black generator. Both
methods dictate that the master timing reference is the supply of sync and burst, and the correct way to time is at the mixer
inputs, not the mixer output, where perfect sync and burst will always be present due to reinsertion. There is more than one
way to time a video system but the underlying principle remains: to achieve synchronism of pictures as determined by the
respective sync and bursts.

THE GREYSCALE
The greyscale has already been mentioned with regard to camera check out. Now it is the turn of multicamera
pictures.
Using a grey card has limitations; cameras have to be matched, and a greyscale makes that task easier. A
conventional greyscale uses step wedges from black to white, usually two running in opposite directions. In the centre of
the chart there may be a hole with a black floc-filled cavity. The purpose of this is to provide a super black. As already
stated, greyscales are rarely true grey, a fact as much due to changes with age as to manufacturing differences. In
practice this is less important than achieving a good match between the cameras.
A multicamera studio utilises camera control units, the CCUs seen in Fig 3.8, providing control of lens iris,
pedestal, and colour from the production area, bringing this facility under the control of studio lighting. The operational camera
control panel, or OCP, will have:

Iris... a 'throttle' type of quadrant control of exposure with a rotating knob to control...
Pedestal... generally known as simply 'black', sometimes 'lift', in studios.

Also, smaller controls to adjust:

Colour gain... red and blue gains.


Colour black... red and blue pedestals.

Gamma may also be available, as may overall gain to set the sensitivity. Other features may be provided, and
there will probably be audio communication with the camera operator.
Begin the greyscale process by arranging the cameras as closely grouped as possible. Provide a light similar to
those in general use in the studio, to light the greyscale.

Fig 3.10 (the grouped cameras face the greyscale chart, lit by a single representative light)

The set up is shown in Fig 3.10. The main purpose is to achieve colour balance to grey, equivalent to white
balancing, and to obtain an exposure setting. The light therefore must be measured for level and colour temperature. Both
these must be representative of scene lighting. Tungsten light, fluorescent, or other forms of gas discharge lighting may be
found. Of these, tungsten is the most predictable as regards colour temperature, although this varies with the power
applied to the light, as explained earlier. Alter the colour temperature by fading the lamp up or down; re-adjust the light level
by altering the focus of the light; a conventional studio light has an adjustable mirror and/or lens to do this. Light the
greyscale uniformly and make sure it is normal to the axis of the camera group.
With a suitable lightmeter set the light level to a typical value for the studio, and set colour temperature to the value
required. Face the meter to the cameras to get a true reading of incident light level.
Camera sensitivity determines the light level, as does the lens iris setting. Lens iris affects depth of field, and if this
is to be a pre-set value, the lens must be operated with the appropriate f-number. Depth of field, sometimes called depth of
focus, is the range that is in sharp focus. As well as f-number, depth of field is dependent on lens focal length, or lens
angle: the wider the lens angle, the greater the depth of field. Depth of field is a user-chosen value, as far as that is
practicable.

Carrying out a greyscale can begin with a black and white balance on each camera. Expose all cameras to the
same signal level, referring to the grey background of the chart. Note the step wedge values at the same time to see if they
are the same camera to camera. Alternatively, one can choose the internal camera value for the light being used. Although,
be prepared to carry out final trimming afterwards.
Manual grey scaling will reveal a great deal about the cameras performance because the process demands more
user involvement and closer scrutiny.
Iris all the camera lenses to closed. On each camera set pedestal, or blacks, to about 3%, or as required. Use the
WFM and picture to match the cameras, selecting each camera in turn to the operational monitoring and measurement set.
Carefully check for colour errors in black using the WFM. If operating in component, the three-H display of Y, Cb and Cr should
appear with pedestal lines at 3% for all three components. If this is not the case, adjust red and blue blacks on the camera
control panel to make them so. This error will show on the composite waveform as trace thickening with chrominance on
the pedestal line. Adjust colour blacks to minimise. Likewise, the vectorscope will indicate chrominance as a shift from the
trace centre. If control of green is available, centre it and use only red and blue at this stage.
The picture must have the final say. If the monitor has a 'mono' facility which turns off picture colour, this will be
an excellent guide to camera grey. Select a camera. Now operate the mono button on the monitor. If the colour changes
adjust camera colour blacks, red and blue, until no change is evident. Watch for differences across the picture;
compromise to achieve the most satisfactory result, concentrating on the centre area. Remember this is assessment of
camera black; picture alignment must be correct (see Working with Pictures) and your eyes accustomed to subdued
viewing conditions.
Open all the irises and expose to the same level, making the grey background about 50%, but avoid
overexposing the white into the white clippers. These can be checked to ascertain if they clip at 100%. Using red and blue
gains, balance the colour to grey by using the vectorscope and/or waveform, and check on the picture. Also, at this point,
note what level the black steps are. If the greyscale has a super black, check that this is near to 3%. It may be a little higher
but should not be lower; if it is, the camera has an internal set-up error. Check that all the cameras perform alike. Operate
the irises from closed to fully open and compare... a severe test but one that will pinpoint errors at the extremes. Observe
differences in lens operation as well as colour changes.
The object is always to get as close to a matching set of cameras as possible. Choosing one camera as master
will help; compare the others to that one. Perfection, of course, is never attained, but a great deal of this is about
satisfying oneself that the cameras are good for the day's show.
And what of that greyscale at the remote camera? The same applies. Use representative light; if that's daylight, be
prepared for constant change with the sun and weather. Line up with a greyscale in the same way as above. No amount
of trimming will achieve a perfect match with the other cameras so carefully grouped in front of their well-lit greyscale. But
a match here isn't so critical; the distant camera is a distant picture. That camera may need adjustment on shot, as it
operates... another good reason for a CCU based system, but one must be prepared for its remoteness to be revealed in
the pictures. A fact of life that may not be so bad as it sounds.

THE DIGITAL STUDIO


Digital studios are operationally very similar to analogue ones. Test and measurement will be at analogue
interfaces, if these are present. If the system is totally digital, digital measurement sets are available to analyse and display
the signals in conventional (analogue) form as waveforms. Where differences are likely to be noticed is the lack of
adjustment required, because the drift that has been a feature of analogue electronics is no longer present, particularly in
digital monitoring.
The 4:2:2 component standard has assumed the greater part of the work. Yet composite video has, within the
new envelope of digital wrapping, received something of a resurgence. Whatever the standard that is wrapped up into
digital form, there remain similar requirements for level checks and timing measurement. An analogue signal once
converted is sealed against change, and that applies equally to errors, both level and timing. These must be above
reproach before conversion to digital. The timing of a digital studio may use genlock, albeit less precisely; the requirement is
to maintain source video within a timing 'window' defined by the digital mixer inputs.
Signal routing, usually in the form of SDI (serial digital interface), may carry embedded audio and timecode as
well. Also the coming together of production video mixers and SDI routing is now more evident. Simpler production
requirements may make use of the router, or an extension of it, as a live switcher.
Digital operations will be considered further in post production.

VIDEO TAPE
Video tape is based on a mechanical system whose timing is quite irregular. Left to their own devices, VTRs
would run up timebase errors far outside video tolerances. The methods adopted to overcome the problem split two ways:
1. To lock the mechanics to the video frame rate by tape motor control.
2. To correct residual errors electronically.

Fig 3.11 (video tape player feeding a timebase corrector, TBC; genlock is applied to both; H timing, C phase and framing controls act between the player's video output and the corrected video output)

Fig 3.11 shows a tape machine with genlock applied. The genlock signal may be black and burst but only the field
element of the sync will be used to lock the motor to the video frame rate. The machine has a 'framing' control to bring the
tape into frame synchronism. A residual mechanical error remains: the tape-to-head velocity varies, causing the off-tape
video to jitter about the point set by 'framing'. The signal passes from the machine to a timebase corrector where it is held in
memory for a period of time. If the residual error is 10 lines... equal to about two-thirds of a millisecond... the TBC will require that
amount of storage. The TBC releases that stored signal corrected to the stable system timebase as defined by genlock.
Pictures off the VTR will start about 10 lines early and will be subject to a TBC delay of the same amount.
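The storage involved is easily put into figures. A sketch in 625-line terms, with the Rec.601 luminance sampling rate of 13.5 MHz used purely to express the store as a sample count; a real TBC's memory organisation will differ.

# Storage a TBC needs to absorb a given mechanical jitter.
LINE_PERIOD_US = 64.0      # 625-line system
SAMPLES_PER_LINE = 864     # 13.5 MHz x 64 us (Rec.601 luminance rate)

def tbc_storage(jitter_lines):
    time_ms = jitter_lines * LINE_PERIOD_US / 1000.0
    samples = jitter_lines * SAMPLES_PER_LINE
    return time_ms, samples

ms, n = tbc_storage(10)
print(ms, "ms of video,", n, "luminance samples")   # 0.64 ms, 8640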

Fig 3.12 (edit suite: players 1 and 2 and a caption generator source a video mixer, whose output feeds the recorder, or edit machine; genlock is applied throughout; timecode links the machines and mixer to the edit controller, with WFM and monitor on the mixer output)

In Fig 3.12 the basic edit system has two play-in machines sourcing to a mixer, the output of which feeds the
record, or edit machine. Genlock is applied to all sources, each play-in machine has an integral TBC. Timecode links the tape
machines and the mixer to the edit controller. Timecode is a frame marking technique in hours, minutes, seconds and
frames. The edit controller issues instructions using timecode to run, rewind and fast-forward the machines, and to the mixer
to carry out transitions, effects, caption insertion, etc.
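The frame counting behind those instructions is straightforward. A minimal sketch, assuming 25 fps non-drop-frame timecode as used with PAL; NTSC drop-frame counting is deliberately left out as it complicates the arithmetic.

def tc_to_frames(tc, fps=25):
    """'HH:MM:SS:FF' -> total frame count."""
    hh, mm, ss, ff = (int(x) for x in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def frames_to_tc(frames, fps=25):
    ff = frames % fps
    s = frames // fps
    return "%02d:%02d:%02d:%02d" % (s // 3600, s // 60 % 60, s % 60, ff)

mark_in = tc_to_frames("10:00:30:12")
print(frames_to_tc(mark_in + 100))   # 100 frames (4 s) later: 10:00:34:12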

GENERATION LOSS AND DROP OUT


Most VTR suites use component or digital signal paths. Composite has largely been superseded, at least at a
higher level, because of the severe limitations imposed by generation loss. Each playback and record event is one
generation and incurs one level of signal degradation, caused by errors in transferring the signal between the electronics
and the magnetic tape.
Digital offers the greatest protection against generation loss, component trails some way behind, and composite
some way behind that. Generation loss is dependent on a number of factors. It has become more problematical with the
advent of smaller tape formats, particularly with regard to drop-out... tape flaws that cause loss of signal, although
electronic correction has done much to improve the situation. Digital recording overcomes the gradual deterioration of
generation loss, but still suffers from large scale drop out. By using more bits than is actually required by the signal, spare
capacity is introduced on the basis that if drop out causes loss of data it may be recoverable from elsewhere. The
principle, known as redundancy, is monitored by Hamlet digital measurement under the terms CRC (cyclic redundancy
code) and EDH (error detection and handling). Data redundancy is now a recognised feature of digital storage and
transmission. As digital systems are designed to cover data losses without obvious picture faults, it has become more
important to keep these errors logged.

DIGITAL ERROR DETECTION OVERVIEW


In order to check that the digital video signal has been received correctly, a Cyclic Redundancy Check (CRC) can be made
on each frame in the generating equipment. This four-digit number is then placed in a 'packet' and put in the EAV-SAV
space of one line of each field. At the receiving equipment the incoming video field also has a CRC calculated, and this
value is compared with the four-digit number sent in the packet. If the two numbers are not identical, an error has occurred
between transmission and reception of the signal. This type of error detection is known as Error Detection and Handling,
or EDH, and is defined by SMPTE RP165. In practice two check sums are sent per frame, one for the active video period
and one for the full frame. A sketch of the principle follows the packet layout below.
A typical packet consists of:
The Header: (000, 3FF, 3FF) This always precedes an EDH packet.
Data ID: (1F4)
Block Number: (200)
Data Count: This contains the number of words that follow.
Active picture CRC: 3 words
Full-field CRC: 3 words
Error flags: 3 words
Reserved: 7 words
Check Sum: This is used to test for transmission errors.
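The sketch below illustrates the principle only: a CRC computed over the video words at source, recomputed at the destination, and compared. The CRC-CCITT polynomial used here (x^16 + x^12 + x^5 + 1) is illustrative; consult SMPTE RP165 for the exact polynomial and field definitions.

def crc16(words, poly=0x1021, crc=0x0000):
    # Bitwise CRC-16 over an iterable of 10-bit video words.
    for w in words:
        for bit in range(9, -1, -1):
            feedback = ((crc >> 15) & 1) ^ ((w >> bit) & 1)
            crc = ((crc << 1) & 0xFFFF) ^ (poly if feedback else 0)
    return crc

sent = [0x200, 0x3FF, 0x040, 0x3AC]   # words as transmitted
received = list(sent)
received[2] ^= 0x010                  # a drop-out flips one bit

print("pass" if crc16(sent) == crc16(received) else "error flagged")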

LEVELS AND TIMING


Preparing the system in Fig 3.12 for editing is similar to that already described for the studio. Video sources must
be timed to the mixing point. The component system will be described; where applicable, reference will be made to both
composite and digital. Hybrids of these also exist: mixtures of analogue and digital, component and composite.
The line up method uses the signal path through the edit machine. This signal is the monitoring video, and is either
the record video, or the off-tape video when the edit machine plays back, as it does when checking a recording. Or, it may
be the EE (electronic-to-electronic) video transferred straight through the edit machine when at standstill. Therefore, the
monitoring video is effectively the video mixer output.
Begin with a test recording of colour bars on the edit machine, after checking them beforehand. Play back and
check again. Measure overall level at the white bar and check the colour components; they should fall into the WFM graticule
marks. Adjust the play back controls of the edit machine as required. Level may read as 'gain'; use this to set to 100%. Use
the black bar to set black level, and set saturation (sometimes called 'colour') until the red and blue components fit the
graticule. See Fig 3.13. Remember, with composite monitoring, set saturation by observing the green bar: adjust so that the
chrominance envelope reaches down to black level. A recording machine has no timing requirement; if these controls are
present they should be left unaltered.
Correctly setting up the play back of the edit machine is important because that's the monitoring video used for
pictures and measurement. Fig 3.13 shows what to aim at.

Fig 3.13

Some distortion will be apparent on the play back due to generation loss. A digital recording will not show this but
watch the replayed colour bars carefully on the monitor. Watch for break up due to drop out. If it's excessive, try
another tape. The steady image of colour bars is a good test but may show flaws that actual programme may not. One can
only use judgement and experience here.
Composite playback of colour bars will appear very indistinct on the vectorscope; the waveform, too, will show
distortion of the chrominance envelope. How does it look on picture? Does colour noise show, particularly in red? Again,
actual pictures will be more forgiving. But this demonstrates generation loss very well.
When setting the playback controls, pay attention to local requirements. Take care not to exceed 100% and don't
forget pedestal, or NTSC set up, if that's part of the operation (see Standards).
Now record black and burst the whole length of the tape, or as far as required. The edit tape takes the edited
programme, and the presence of pre-recorded black and burst also provides timecode, enabling the edit controller to
recognise the edit tape position. Recorded programme should have colour bars on the first 30 seconds of tape; these will
be used to line up subsequent replay and play-out machines. So record half a minute of colour bars... check them first.
Then record the ident, or countdown clock.

Setting up the play in machines for the edit will follow the same pattern as for the edit machine. Select each in
turn on the mixer, making sure the path is direct with no video effects in circuit. The mixer itself may be checked out. If
comprehensive monitoring is provided, check the signals at the mixer inputs to be sure the mixer is not adding errors of its
own.
Regarding timing: component requires only H timing. Play-in machines are treated as sources to the mixer as
described for the studio. Use the WFM, selecting each in turn on the mixer, to set H timing. Captions, graphics, and other
sources should be treated in the same manner. It may be advantageous to assign one source as master. Frame timing will
be required, as mentioned earlier, but most current machines will deal with this as part of the internal timebase function
with genlock.
The tapes to be edited will have... or should have... colour bars at the beginning. Use these to set up each player
for level, saturation, and black, and timing (and C phase if applicable). Bear in mind that source material may vary. A
camera may have signal levels that exceed 100%. Although colour bars are a standard, programme material may not follow
the standard. Colour may also exceed the gamut of RGB if it is from a graphics source, or has been manipulated in some
way. The digital world helps here; Hamlet provide out-of-gamut warnings for illegal colours. If these arise they can usually
be adjusted by either reducing saturation, applying pedestal, or both. Beware that these remedial actions, whether
reducing player 'gain' to bring signals down to 100% or adjusting for illegal colour, change the original line up so carefully
carried out barely a moment ago.
The video mixer should be adjusted to limit at 100%. Do not be tempted to adjust higher to pass the excesses of
the camera, for these will only be removed at some later stage. Ideally, a dedicated colour corrector should be used for
such picture corrections, but installing one complicates the edit configuration, and will very much depend on the kind of
work to be done. As colour correction is part of the photographic process it may be better left to a later stage in post
production where it can be handled by a specialist.
Composite playback H timing and C phase adjustments follow a similar procedure to studio source line up. The
line up procedure must be gone through after each change of play-in tape. Where the edit system uses other sources as
well, such as captions or graphics, line up as for studio sources.
The vision mixer must also be considered. How it handles sync (including colour burst) will depend on its design
philosophy. It is worth reading the mixer handbook to find out. Syncs must always be present at the output, even in black.
A quality unit for broadcast work will pay particular attention to sync, to ensuring the sequence remains unbroken or
undisturbed in any way. Complication arises when two sources are used simultaneously, as when two pictures appear on
screen together. This may be a wipe or mix from one to the other. The mixer design will apportion priority to one source or
another, or it may choose to reinsert sync from elsewhere. Suffice to say: always be mindful of how the mixer performs
these functions.
Timing must ensure that sources align to each other, and to the sync to be inserted. If that comes from a priority
source, know which that is. To identify it, set up a wipe between two sources. Move the H timing of one of them and
observe the result on the WFM. One half of the wipe will slide horizontally. That's the non-priority one, and that's the one
with the H timing being moved. The other source is supplying the sync through the mixer. Priority may change depending on
how the mixer functions. Experiment is worthwhile to discover more.
Digital systems will also need timing, unless there is a full frame synchroniser attached to each input of the mixer.
Cost will probably preclude this option. Precise timing may not be required, as the input may well cope with small errors, but
the principle remains: all pictures must begin and end at the same time if they are to be mixed or inlaid together. Be aware of
how VT machines operate as regards their timebases, and how they output in digital format. Detailed instructions are not
possible due to the diversity of system configurations, particularly where analogue and digital hybridise to share the work.
Levels and timing requirements must still be complied with. To go into every nuance of design is not appropriate
here: these are the basic principles. Understand them before going into the subject more deeply.
Component synchronisation is much simpler than composite and offers greater flexibility of sources and
manipulation. In the process known as conforming, original component masters are edited to composite for transmission.
The edit machine may be either an analogue or a digital composite recorder. Composite editing is still around, maybe in
digital guise, and the final element of PAL and NTSC comes under the spotlight: colour framing, or SC-H phase.

SC-H PHASE
Chrominance and luminance have thus far been considered as discrete and separate signals. Composite video is
the addition of these into one signal. In reality, subcarrier is the master signal.
Both PAL and NTSC derive their line rates of 15.625 kHz and 15.734 kHz respectively from subcarriers of
4.43361875 MHz and 3.579545 MHz respectively. The line rates are also precisely linked, by the 625 and 525 line frames, to
the field rates of 50Hz for PAL and 59.94Hz for NTSC. Thus, all the timebase elements from subcarrier down to field and
frame rate are related. Should subcarrier frequency change for any reason, so will line, field and frame rates in proportion.
Chrominance and luminance are therefore related and cannot be separated and rejoined without due
consideration of subcarrier phase and line timing. The relationship is known as 'SC-H phase'. The standard states that the
leading edge of line sync must be coincident with the positive-going swing of subcarrier. And that is all that needs to be said
about this rather obscure piece of the video standard. Until, that is, its implications for video tape are revealed.

More of the PAL and NTSC differences now become evident. Field rates and the number of lines in the two
systems influence choice of subcarrier frequency. The two frequencies were chosen to accommodate the existing black
and white TV systems, and to make the dot pattern of subcarrier less visible on black and white receivers. The choice meant
an incomplete number of cycles of subcarrier in each field. Doing the maths... dividing the subcarrier frequency by the field
rate... gives NTSC an SC-H relationship that has an odd three-quarters of a cycle per field.
What does this mean? Three-quarters of a cycle is equivalent to 270° of subcarrier phase. The resulting field
sequence may be set out as in Fig 3.14

frame 1: field 1 (0°) and field 2 (+270°); frame 2: field 3 (+540°) and field 4 (+810°); end of field 4: +1080° = 0°
(subcarrier phase angle at the start of each field)

Fig 3.14

During one field, subcarrier gains an odd ¾ cycle, or 270°. At the end of field 2 it has moved through 540°...
more than one whole cycle. By the end of field 4 this has accumulated to three complete cycles and is back in the same phase as
the start of field 1. Therefore, it takes 4 fields (2 frames) for the subcarrier to return to the starting line-to-subcarrier phase
relationship. NTSC is therefore described as a Four Field sequence.

PAL colour framing is an Eight Field sequence, derived from the remaining 0.375 of a cycle of subcarrier that is
left in each field.
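Both sequences fall straight out of the arithmetic above. A sketch, using exact fractions so no rounding intrudes; the subcarrier-to-line-rate ratios used are the standard 227.5 for NTSC and 1135/4 + 1/625 for PAL.

from fractions import Fraction

def colour_frame_sequence(sc_per_line, lines_per_field):
    residual = (sc_per_line * lines_per_field) % 1   # odd fraction of a cycle
    fields = 1
    while (residual * fields) % 1 != 0:
        fields += 1
    return residual, fields

# NTSC: 227.5 cycles per line, 262.5 lines per field
print(colour_frame_sequence(Fraction(455, 2), Fraction(525, 2)))
# -> (Fraction(3, 4), 4): 3/4 cycle per field, four-field sequence

# PAL: 1135/4 + 1/625 cycles per line, 312.5 lines per field
print(colour_frame_sequence(Fraction(1135, 4) + Fraction(1, 625), Fraction(625, 2)))
# -> (Fraction(3, 8), 8): 3/8 cycle per field, eight-field sequence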
Colour framing has serious implications for video tape editing. Each tape player must be aligned to its neighbour in
a 4 field sequence for NTSC and 8 field one for PAL. Should field 1 from one player be aligned with field 3 from another and
edited together, the fields will match but the subcarrier phase will jump over the edit. The sudden shift of colour phase will
cause a colour 'bump' and a hop sideways of the picture.
In editing terms this is very restrictive. PAL is the worst off; four frames is a significant time span where cutting
points are concerned. Although the cut may be made anywhere once the tapes are aligned on the 4-frame sequence,
slipping a frame or so to tighten or loosen the edit is impossible.
Nor does the problem disappear with digital video. Digital conversion encloses the PAL or NTSC signal and inhibits
further adjustments. SC-H phase cannot be ignored whatever the system. The 4 or 8 field sequences are still there to be
aligned and timed as before with all the inherent framing limitations during editing.
The check of SC-H validity is beyond the conventional waveform. In their range of test and measurement
instrumentation, Hamlet have proprietary methods of measuring the relationship of line sync to subcarrier phase.

One may be excused for ignoring SC-H phase. So small a thing... a slight hop of the picture... an insignificant
change of colour. Unfortunately SC-H is part of the standard. It is measurable. Not only by individuals, who may choose to
ignore it if they wish, but by VTRs and transmission interchanges as well. These do not ignore. They will reject any signal
that doesn't conform to the standard. Television transmission is too complex to allow one signal to compromise the whole
system and there is continual assessment throughout the chain; sometimes this is quite unforgiving. Each part depends on
all others doing their job correctly. And that includes the users of it.
SC-H phase is the final element of the synchronising pattern of video. It is the most complex with its origins way
back in the development of colour television and the need, as perceived then, to retain compatibility with monochrome
receivers. The description here is necessarily brief, further study may be done by those interested.

Today, far more use is made of component recording. Although this increases complexity with its tri-circuit
demands, the advantages are the avoidance of complex synchronisation and much improved picture quality, and, of
course, the restriction of the 4 or 8 field sequence is removed. The subcarrier based system was specifically designed for
single circuit transmission, and there comes a point when it begins to slide into obsolescence. That point has arrived in
editing and video effects.

AUDIO
What has already been said about video applies to audio: the need to prove and check. Use Hamlet audio level
metering to check line up tone. Many systems will automatically select tone at the same time as colour bars. If not, do so
separately. Check the play back level of the edit machine and adjust as required, nominally to 4 on a PPM, unless otherwise
advised.
Audio line up levels are described in the earlier part of this book. They may vary from one type of operation to
another. Audio is less regulated in this than video. But take care of the underlying principle and work within the system
limits. Stereo has variations in line levels based on the adding together for the mono listener, and on how various organisations
apply them. Also take note of stereo difference signals: the difference of A to B, or left to right. Be aware of guidelines
issued by programme commissioners regarding levels, compression, and stereo sum and difference signals.

THE FINAL CHECK


Having lined up the edit and both play in machines, try an edit. Use the start of the play in tapes with colour bars,
edit between the tapes and review the result. Check levels, saturation, phase if relevant, on the test and measurement set
and the monitor, and check audio levels as well over the edit. Observe carefully: look out for non-clean edits and disturbances to
the picture, and listen for clean sound.
Editing may be simple, or it may be complex. A simple edit may, quite quickly, become very complex. Producers may
make last minute changes. Whatever the reason, the better prepared the system is, the better able it is to cope with
alterations. Understanding is essential. Know the system thoroughly.

WORKING WITH PICTURES


USING THE PICTURE MONITOR
Picture alignment is fundamentally important; how else can a picture based system be judged? Eyes are drawn
toward pictures, away from waveforms and vectors. Pictures command attention. Proper alignment of the picture monitor
is very important.
Modern monitors are in the main reliable in holding their set up, but that does not mean that the procedure has
been carried out correctly in the first place. Viewing conditions have the greatest effect on how useful the picture monitor
is. The eye/picture relationship is interactive; like all other senses, the eye is not absolute. It constantly adjusts to
accommodate changes in our environment, and in so doing it is trying to offer the most satisfactory representation of the
scene before us. This is not what is required when assessing pictures. A standard has to be used.

SETTING UP THE PICTURE


There are three basic controls: brightness, contrast, and saturation. These correspond directly to the basic
camera controls: black level (lift or pedestal), exposure, and saturation.
This similarity between controls has been the cause of a number of unfortunate mishaps. For example,
brightness is electronically identical to black level, and has exactly the same function. Therefore, it is very easy to adjust
monitor brightness where the actual fault lies with the setting of camera pedestal. Contrast and exposure can also be
confused. It is not unknown for such errors to be responsible for expensive re-shoots.
Where picture monitors work together they should all look the same. Where the eye detects differences, the mind
will want to question why. Absolute line up, whilst highly desirable, is sometimes best subordinated to the best matching
of one monitor to another.
No two pairs of eyes are the same. The colour standard defines colour acuity but within a given tolerance.
Monitor set up is a critical operation and is ideally delegated to one individual rather than left to a group.
Picture monitors fall into grades. Grade 1 is the highest and will be found in positions where quality check is
carried out. Grade 2 is the more common, found in control areas and for more general use. The main differences between
these is resolution, but a modern grade 1 monitor will have precision set up... possibly automatic, and precise stabilisation.
Basic monitor performance has risen over the years, particularly where stability, colour rendition, black performance, and
tolerance to variable viewing conditions are concerned.
Picture set up requires specialised test signals. Colour bars should never be used to set brightness or contrast.

SETTING THE MONITOR WITH PLUGE


PLUGE (Picture Line-Up Generating Equipment) has been in use for many years. The form varies but Fig 3.16
shows a typical form. The background is black at 0%. The two vertical bars are 5% above black and 5% below black.
Setting brightness first, adjust until the darker of the two vertical bars is just visible. Alternatively, setting the
lighter one to be just visible gives a lower brightness setting. The latter is appropriate where the video signal has fixed set
up, or pedestal, in use, as specified for NTSC.
The 4-step pattern at the left hand side is for measuring monitor gamma and setting monitor contrast. In practice
the white patch at the top is the most useful. The manufacturer will state the maximum light output from the screen for
white level as measured in a specified way. Without measurement, the user must set contrast empirically to suit the
viewing conditions. Avoid setting too high as this will result in a softened image and tonal value distortion. The optimum can
be found by raising contrast until the 'burn out' point is seen, then reducing until the patch becomes clean once more.
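The levels behind the pattern can be set down numerically. A minimal sketch of one PLUGE-like line in percentage units; only the 0% background and the ±5% bars come from the description above, the bar positions and widths are invented for illustration.

def pluge_line(width=720):
    line = [0.0] * width                  # black background at 0%
    for i in range(width // 3, width // 3 + 40):
        line[i] = -5.0                    # bar 5% below black
    for i in range(2 * width // 3, 2 * width // 3 + 40):
        line[i] = 5.0                     # bar 5% above black
    return line

line = pluge_line()
print(min(line), max(line))   # -5.0 5.0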

Fig 3.16

Current monitor design places emphasis on menu set up. But beneficial as this may be, it does not preclude the
requirement to know that the picture is properly set up for the viewing conditions in use. What has been described applies
to all picture displays, whether CRT based or LCD, projectors and large screen. The test procedure will very quickly show
up errors and shortcomings in any form of display.

SETTING COLOUR BALANCE


Not all monitors provide easy access for this. Modern integrated electronics are very stable and the factory set
balance may hold for the life of the monitor. If that is so, it's for the individual to decide if a check is worthwhile. The eye is
forgiving; it may not see an error when one exists. If so, why spoil the illusion? As experience is gained the eye
becomes more discerning, and when an error is recognised it cannot be ignored. The following sets out the basic method
for those wishing to pursue monitor colour balance.
Red, green and blue are set at black and white in the picture display. Colour black, more usually known as 'cut-
off' in conventional CRT displays, is set for black, or near blacks, and colour gain for white. Use a monochrome (black and
white) test signal. Don't use colour bars, even with the colour turned off; the high level of saturation may cause errors in
the delicate balancing procedure. Vical is ideal; use the three-bar display for black and white setting. Begin with the grey
bar adjusted to 100%.
Conduct set up in darkened conditions and let the eyes adjust before commencing. Follow a methodical plan: in
the first instance, adjust only red and blue. Green should be considered as master... indeed in many designs, green is not
adjustable. Observe with care, identify which colour is in error. This will take practice. Set the darker tones first. If unsure,
move a control only after identifying its current position so that it can be restored.
A final point. CRT monitors have high voltages. Don't use metal trim tools. Take care near circuit boards and
around the tube base. Be particularly careful of metal jewellery coming into contact with exposed live parts of the monitor
interior. If in doubt, take advice.

ENGINEERING WITH PICTURES


Whilst the eye cannot appreciate absolutes, it has an exceptional ability to see differences in both luminosity and
colour. Where two values of light appear side by side it is easy to distinguish relative brightnesses. The same applies to
colour.
Two situations have already been described that used the picture to align source timings. In Fig 3.9 the camera
picture was inlaid with a graphic, and later there was reference to a similar check of two tape players. Composite colour
phase errors show as hue or saturation changes. The eye is sensitive to both. That is why these simple tests work when
two signals are shown 'wiped' together, or inlaid to form one picture.

Fig 3.19 (VTR 1 and VTR 2, each with its own H timing control, feed a video mixer locked to a common sync source; waveform and picture monitoring of the mixer output shows the second source slipped against the first)

Fig 3.19 shows how timing in these instances appears on the waveform and the picture.
Colour phase can be dealt with in the same way: the phase error will show on picture; compare with the
vectorscope as well. The monitor may have the facility to select between 'delay PAL' and 'simple PAL'. Simple PAL is less
sophisticated and does not desaturate but allows the alternate lines to show the phase error, one leading, the next,
lagging. At a distance the eye integrates and effectively carries out the desaturation process itself. Close inspection of the
screen will show the alternate line phase error. When the C phase for both sources is the same, both halves of the
screen will appear the same.

The picture is an essential part of video. The more it is used, the better the experience of the user becomes. Make
comparisons, picture to waveform, to vectorscope. Understand what they do and how they work. Practise and gain
confidence by using all methods of test and measurement. The quick double check on picture takes experience, but that's
security.

WORKING WITH TEST SIGNALS
WHAT IS DISTORTION?
Picture manipulation could be described as distortion but one would be hard pressed to convince picture creators
that their hard work is no more than distortion. So what is distortion and how does it occur? Strictly, distortion is unwanted
alteration to the signal. From the insidious increase of noise to the bending of non-linear transmission, distortion can take
many forms.
Test signals are designed to prove a system or circuit and to reveal faults. The design and choice of test signal is
quite important; too severe a test may indicate a fault where, to all practical requirements, none exists. One example is
applying audio standards to video. Video is far more forgiving as regards some forms of distortion but not others.
The use of colour bars has already been described in this handbook, for it is the most widespread of all video test
signals. Yet it has singular shortcomings... quite apparent from its misuse. For example, its use in picture assessment or
picture set up can be positively misleading. Colour bars is an example of a test signal that functions to prove the
transmission system at its limits; to prove that a circuit is capable of carrying all elements of the signal without distortion.
Distortions break down into two main categories: those that modify the signal and those that add spurious signals. Noise
falls into the latter category, but is rarely measured on a routine basis in studios. However, this would form a regular test of
a long distance transmission circuit. Fig 3.21 shows three types of distortion as applied to composite colour bars. The first
two alter the signal: HF loss and non-linearity. The third is a spurious addition to the signal.
The HF loss is reduction of signal high frequencies, and shows most noticeably in the colour subcarrier envelope.
Note the gap between the green bar and black level is not, in this case, a pedestal error but loss of subcarrier amplitude.
Non-linearity in the centre is revealed by the lowering of the white bar. At first sight it appears as simply low level, but
compare the step heights... view the waveform Y only, and see how they compress as the signal approaches white. The
third is caused by electrical disturbance from other circuits or equipment inducing noise into the video circuit. DC restore
will remove most of the effect, if not all, but in serious cases the video may become overwhelmed.

The three conditions are relatively easily spotted and well illustrate why colour bars has become the universal
test signal. Yet none of the conditions is easily defined in terms precise enough for an engineer to report. Each
condition requires a dedicated test signal for that. HF loss is best dealt with by pulse and bar.

PULSE AND BAR


This signal was designed to measure the very complex frequency characteristics of long cables. These are not
faults in the strict sense, but rather the normal effects of cable transmission. Because the cable characteristics produce
predictable effects it is possible to define the corrections needed to offset them. The pulse and bar method therefore
makes use of a special graticule on the WFM against which the distortion is measured and a 'K' factor issued, from which the
appropriate correction can be introduced. See Fig 3.22.

Fig 3.22

The bar functions at lower frequencies. Its half line duration is designed to show up circuit faults that cause 'sag'.
The DC restoration imposed at each interchange will correct for sag over more than a line but will be unable to deal with LF
distortion taking place within the line period.

RAMP
The ramp waveform is a steady increase of signal level from black to white, usually left to right, and is often
used in camera engineering set up. It is a picture test signal to show unevenness, or non-linearity, through the range of
picture tones from black to white. Fig 3.23 shows a ramp signal.

Fig 3.23
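Because a ramp is trivial to express numerically, it also makes a quick linearity comparison easy. A sketch; the mild mid-tone sag applied to the 'distorted' copy is invented purely for the demonstration.

def ramp(samples=720):
    # 0% (black) to 100% (white), left to right
    return [100.0 * i / (samples - 1) for i in range(samples)]

def max_linearity_error(reference, measured):
    return max(abs(m - r) for r, m in zip(reference, measured))

r = ramp()
distorted = [v - 0.0005 * v * (100.0 - v) for v in r]    # sags at mid grey
print(round(max_linearity_error(r, distorted), 2), "%")  # worst at 50%: 1.25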

A fault may go undetected because the picture conceals it. That is not to say all is well, a fault is a fault and
should at least be investigated. A condition observed today, may become a serious problem tomorrow. A test should reveal
the extent of the problem and determine whether action is required or otherwise. Dealing with a fault could mean down-
time; that alone will decide the course of action. As will picture content: does the fault spoil the picture? Decisions like these
may be more than engineering ones. The programme maker would have an opinion, and may ask advice. What will the
effect be on the programme? The answer may be another question: what is required of the picture?
No clear cut answer exists. Each situation has to be dealt with after knowing all the facts.
Non-linearity distortion has a marked effect on subcarrier phase, which is precisely why PAL was developed. The
reaction of two engineers, one in Europe and one in America, might be quite different. The PAL system would conceal but
not correct, and the engineer would probably ignore the fault until a more convenient point. With NTSC the same fault could
change the colour of a person's face, and the decision for using such a distorted picture would be far more problematical.

The principle of test signals is to produce a waveform that either 'looks right', or can be easily checked against a
measuring graticule, like the pulse and bar above. The vectorscope is another good example with its six colour boxes for
colour bars. To look right (or wrong) by eye demands some degree of experience on the part of the user, and a great deal
has been made of this need to practice and gather experience throughout this publication. Test signals that say 'yes' or 'no'
are examples of those designed for the less experienced, or to enable quick and accurate decisions on circuit or system
acceptability. Examples that fall into this category are the self-checking of digital circuits that throw up error flags. The
Hamlet Digiscope and Monitorscope both check for bit redundancy errors and will issue a record of such events with
time and date. What is done with the record, of course, is up to the individual, to act upon or ignore. That's the human
response... the facilities can only report.
All test procedures offer this to some degree; one soon gathers sufficient understanding to form a pretty good
idea about how well a system functions, just by a quick glance. However, it is when one has to communicate the details of
the fault that problems arise. What is 'low'? How high is 'high'? Can you state it in dBs? Writing a fault report. Describing a
condition over the phone... most regularly occurring errors arise over transmission links, and to pinpoint one means going over the
various sections. Forming a standard description is part of the test and measurement procedure. Some operations will use
voltage, others percentage, and so on. But this, in itself, is not the problem. What is required is to be methodical, to
understand the system, and to be precise.
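For level errors the dB figure is quickly derived; video levels are voltages, so the 20 log10 form applies. A minimal sketch:

import math

def level_error_db(measured, nominal=100.0):
    # Error of a measured level against nominal, both in percent.
    return 20.0 * math.log10(measured / nominal)

print(round(level_error_db(90.0), 2), "dB")   # a 90% white bar: -0.92 dB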
In many instances a distortion will not be evident until a test signal identifies it... which is as it should be. A
check out beforehand, however quick, could identify a fault before it becomes a problem. Colour bars are invaluable
because they are so common that one soon becomes familiar with how they look in waveform and as vectors. It is a matter of
careful observation: more than casually checking that the white bar reaches 100% and the green envelope reaches black level,
it means learning the form and shape. As an example, the seventy/thirty balance of picture to sync pulse height becomes so
familiar that a slight error will immediately reveal itself to the experienced observer.
A lot of faults need never happen. They are caused by incorrect operation; a predicament only alleviated by
knowledge. Faults arise because equipment wears out or is not properly maintained. Or it is abused in some way. Take
heed of what the manufacturer recommends. A check as to what invalidates the guarantee will say a lot about the design
limitations.

One error that theoretically should never happen is misalignment of the colour burst to chrominance. Composite video was so designed as to make the alignment of the colour burst
to chrominance perfect. Generated as one, both receive the same treatment through the same circuitry and transmission.
Or do they? Video mixers often separate them in the various processes of mixing and inlaying. The WFM does not reveal
the error but the vectorscope does. Of more concern is that if this signal is converted to a digital signal to protect it in
hostile storage and transmission environments, the error will likewise be protected. Make sure the signal is correct before
conversion because faults cannot be put right without de-conversion.

BOWTIE
This test signal is a dedicated component check of the 3-circuit levels and timing. As already described, three
components must be sent over accurately matched circuits. Short cables of a few metres will not exhibit practical
differences, assuming they are of the same length, of good quality and in good repair. This test is of particular value for
complex routing and for sending through video mixers. It is an example of differences in the three circuits being quickly
revealed by changes to the signal appearance, as in Fig 3.27.

Fig 3.27

OTHER TEST SIGNALS


A whole range of test signals are available for specific measurement.
Step: a series of equal value level steps from black to white for checking linearity (Fig 3.28). The straight-line ramp
signal in Fig 3.23 reveals non-linearity, but Step can put a value to it by measuring the change of step height; a worked
check follows Fig 3.28.

Fig 3.28
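As promised above, a sketch of the measurement: in a linear system all step heights are equal, so the worst deviation from the mean step height puts a figure on the non-linearity. The measured levels below are invented for the demonstration.

def step_nonlinearity(levels):
    # Percent deviation of the worst step from the mean step height.
    heights = [b - a for a, b in zip(levels, levels[1:])]
    mean = sum(heights) / len(heights)
    return max(abs(h - mean) for h in heights) / mean * 100.0

# five equal steps black to white, compressing toward white
measured = [0.0, 20.0, 40.0, 59.5, 77.5, 94.0]
print(round(step_nonlinearity(measured), 1), "% worst-step error")   # 12.2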

Multiburst: a frequency test (Fig 3.29). A series of spot frequencies one after the other along the line reveals
unevenness in frequency response. Useful for analysing signal degradation through video tape.

Fig 3.29

Flat field: a signal of constant level in line and field for testing DC circuit response (Fig 3.30). Any deviation from the
straight and level waveform indicates circuit shortcomings. This is also a demanding test of the picture display for evenness
of brightness and accuracy of grey over the screen area.

Fig 3.30

A FEW TIPS
Not all situations demand specialist test signals. Most cases can be handled quite well with colour bars for
circuits and systems, and ramp will suffice for the camera. What is important is to have reference signals which can be
your criteria. If precise, repeatable signal levels are required from day to day, regular test procedures should be put in place
for all to follow. Such procedures fall into both operational and engineering categories and are attended to by everyone
who wants to achieve excellence in vision.

Intelligently used, the basic test signals will reveal a great deal and will enable the user to conduct worthwhile
tests and measurements to check system validity.

A flat field can be simulated by camera pedestal. Capping the camera and raising the black level as far as it will go, say up to 50%, will show up picture monitor colour purity errors. This, of course, assumes that the camera has a precision black signal, but a check on the WFM for constancy of level, and on the vectorscope for any deviation from true grey, will give confidence.

Using colour bars to check frequency response is very common. Make sure the signal is accurate beforehand. Watch out for black level variations that may give false saturation results if one is using the 'green bar to black level' check in composite. The best indication of a drop in chrominance due to frequency response is to observe the yellow and cyan bars of 75% bars: the tops should be level with white at 100%. The vectorscope shows frequency response loss in colour bars easily, because the colour vectors fall short of the graticule boxes.
Level measurement usually takes place against the WFM graticule, but some systems provide for a quick and effective measurement by comparison. A lot of measurement is for matching purposes, source to source for instance. Here it is more important to have matching black and white levels than to get the last 1% of absolute accuracy. Select one source to one WFM input and adjust to the graticule as closely as possible. Now select another source to the other input and flick between the two; see how easily the differences show. Do the same with the vectorscope for phase and chrominance errors.

This technique is similar to using a picture monitor to show differences. Use waveform, vectors and pictures together. Don't expect everything to agree; it is an unfortunate fact of life that nothing is perfect. It is a matter of pushing the errors beyond the point of perception.

PART 4 STANDARDS

GAMMA
The cathode ray tube gives rise to video gamma. Its non-linear characteristic results in compressed dark tones and stretched lighter tones. In black and white television the effect made for more dramatic pictures, a benefit in the less than perfect days of early TV. Another advantage was that noise picked up en route from studio to viewer was reduced. With the advent of colour, gamma compression was no longer acceptable. However, to retain the noise benefit it was proposed to correct for picture tube gamma at the camera. The result is a video system with a pre-distorted signal, which means that all picture sources must operate with this pre-correction, including graphics and caption generators.
Over recent years the principle of precision gamma correction has been eroded. The value to picture creators of adjusting gamma has brought about a relaxation of the standard. It is, nevertheless, useful to know what gamma is.

FIG 4.1 PICTURE AND CAMERA GAMMA

The graph shows the picture gamma and camera gamma overlaid. The picture curve is light output from the
screen against signal input (in rationalised values). The camera curve is signal output against light input. In both cases, the
relationship is:
Output = Input^γ, where γ is the gamma value.

The average picture tube has a gamma of 2.2. Correcting this requires a camera gamma of 0.4, which is the standard value used in broadcast cameras. Note how the curves mirror each other: the straight line is the resultant, the corrected distortion, and represents a gamma of unity, the ideal.
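
As a small numerical check of the idea, the sketch below runs a few light values through a camera curve and then a tube curve. The tube gamma of 2.2 is the figure quoted above; the camera value here is the exact reciprocal 1/2.2 (about 0.45, which the text rounds to 0.4) so that the round trip comes out at exactly unity, an assumption made for the arithmetic.

    TUBE_GAMMA = 2.2               # average picture tube
    CAMERA_GAMMA = 1 / TUBE_GAMMA  # exact reciprocal, assumed for this sketch

    def camera_signal(light_in):
        # Signal output for a scene light input, both in the range 0..1.
        return light_in ** CAMERA_GAMMA

    def tube_light(signal_in):
        # Light output of the picture tube for a signal input, 0..1.
        return signal_in ** TUBE_GAMMA

    for light in (0.1, 0.25, 0.5, 0.75, 1.0):
        # The corrected chain lies on the straight line of gamma unity.
        print(f"{light:.2f} -> {tube_light(camera_signal(light)):.2f}")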

TV STANDARDS
Apart from line and frame rates, the differences between NTSC and PAL are numerous. NTSC uses the IRE unit as its measure of level.

IRE units (or %)            0      2      5      10     20     50     70     85     100
Volts, ref to video black   0      0.014  0.035  0.07   0.14   0.35   0.49   0.595  0.7
Volts, ref to sync level    0.3    0.314  0.335  0.37   0.44   0.65   0.79   0.895  1.0

NTSC allocates slightly more of the signal to picture and slightly less to sync than PAL, but as 7.5 IRE units of the picture range are taken by the forced set-up (or pedestal), the result is about the same for both standards.
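
The table above is a straight scaling, and a minimal sketch of the conversion follows, using the 0.7 V for 100 IRE and the 0.3 V sync depth given there. The function name is an illustrative assumption.

    def ire_to_volts(ire, ref_to_sync=False):
        # 100 IRE spans the 0.7 V picture range; add the 0.3 V sync
        # depth when the reference is sync level rather than black.
        volts = ire * 0.7 / 100.0
        return volts + 0.3 if ref_to_sync else volts

    print(ire_to_volts(50))          # 0.35 V above video black
    print(ire_to_volts(50, True))    # 0.65 V above sync level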
The definitions of the standards vary. Although the differences are small, they are significant to the measurement process, particularly if checked by an automatic process.
The line sync period extends from the end of picture on one line to the start of picture on the next. This period is often called 'line blanking'. Except for the line sync pulse and the colour burst, the level is at black.

FIG 4.2 PAL LINE SYNC PERIOD

FIG 4.3 NTSC LINE SYNC PERIOD

Field sync is a sequence of pulses at half-line intervals, called the broad pulses because of their length.
Equalising pulses are the narrow pulses at half-line intervals either side of the field sync that ensure the field sync is in the same relative position every field. Some of the unused blank lines after the field sync may be used for test signals, video tape timecode and identification information.

PAL Field Sequence


Field Blanking Period = 25 Lines
Field Sync Period = 2.5 Lines
Fields 1, 3, 5 and 7, the 'odd' fields, are identical. Fields 2, 4, 6 and 8, the 'even' fields, are identical, with the phase of the subcarrier reversed. See SC-H.

NTSC Field Sequence


Field Blanking Period = 20 Lines
Field Sync Period = 3 Lines
NTSC Field 1 is identical to Field 3. Field 2 is identical to Field 4, with phase of subcarrier reversed. See SC-H.

FIG 4.4 PAL FIELD SYNC PERIOD

FIG 4.5 NTSC FIELD SYNC PERIOD

FIG 4.6 SDI LINE RELATIONSHIPS
