How it all began
In a companion note - “The birth of digital transmission
and distribution” - we looked at the early forays of the BBC into digital
distribution and particularly the setting up of what came to be known as the
68PAL project – a pilot project looking at the opportunities offered by the
digital distribution of television and radio signals.
The purpose of this note is to provide a little more
information, for those who are interested, on the technical realization of
the terminal equipment developed for the trial. The reader is warned that a
reasonably technical background is required to get the most from this note.
The outline specification of the 68PAL terminal equipment
was as follows:
- We wanted to be
able to carry two full broadcast-quality TV signals (BBC One and BBC Two),
including associated sound, within a standard 140Mbit/s multiplex.
We also wanted the equipment to be transparent to any data signals carried
in the field interval of the TV signal (mainly used for transporting
Teletext information) but there was no requirement to carry the Insertion
Test Signals (ITS).
- We aimed to get a
very high degree of ‘transparency’ – with no perceptible impairment
to either sound or vision signals with at least three coding and decoding
processes operating in tandem.
- The system was
required to be tolerant of degraded or disturbed analogue sources and, in
their presence, should not demonstrate appreciably greater subjective
impairment than wholly analogue circuits.
- We also wanted a
standard 8.448Mbits/s interface to be available to carry other services,
including a number of audio channels that could be used, for example, to
distribute our FM radio channels.
- We wanted to be
able to add or remove one or more of the major components of the
asynchronous multiplex without having to fully de-multiplex and decode the
entire 140Mbit/s bit stream.
- Monitoring of key transmission parameters was needed, including the
measurement and reporting of transmission error rates or any framing
losses or input signal losses.
- As the equipment
was likely to be moved around reasonably often, it needed to be
reasonably transportable. Full portability was recognized early on as an
unrealistic goal.
- Monitoring and
supervisory facilities were required to be good enough for use in
operational environments and we wanted to include built-in diagnostics to
ease maintenance. Although the initial equipment was not designed or built
to the full specification normally required of operational equipment, it
was designed to minimize the amount of additional work required to get
it to that standard.
The Structure of the Multiplex
The 140Mbit/s signal (actually 139.264Mbit/s) comprised two identical
“packages”, each of 68.736Mbit/s.
Standard commercial equipment was used to combine and separate the 68Mbit/s packages.
The BBC designed and built the multiplexer used to
assemble each individual 68Mbit/s package. The multiplex comprised:
- A 53.2Mbit/s
tributary for the video.
- A 676kbit/s
tributary for the stereo TV sound.
- An 8.448Mbit/s
tributary for additional audio and data signals.
- A 4.096Mbit/s
tributary allocated for error-protection but which could also be made
available for additional data should error-protection prove unnecessary
(they were optimistic days!).
- The small remaining
capacity was used for various control, signalling and synchronization
purposes.
Figure 1 gives an overview of the transmit terminal. All the signals are
asynchronously multiplexed together using a technique called plesiochronous,
or near-synchronous, multiplexing.
The nominal bit rates of the different tributaries are independent of each
other and of the resultant 140Mbit/s signal - avoiding the need to
synchronise the data-rates of each of the multiple sources.
The signals are multiplexed into a fixed output frame containing 378 6-bit
words. Each frame is around 33μs long. Each of the multiplex tributaries has
a fixed allocation of words (see Table 1). This gives a considerable amount
of flexibility including the ability to over-write the entire frame or, by
over-writing the relevant words, any single tributary within the frame.
The hardware implementation was also based on a parallel
6-bit word format that had the advantage over serial operation of requiring
a lower clock frequency and a significant reduction in design difficulties
related to signal timing. High-speed devices were only required for the
final conversion between the 6-bit parallel format and the 68Mbit/s serial
bit-stream.
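As a quick sanity check on the figures above (this is not project code, and the actual word map of Table 1 is not reproduced here), the sketch below works out the frame duration and the nominal number of 6-bit words each tributary would occupy per frame; the fractional remainders are what the plesiochronous justification mechanism has to absorb.

```python
# Frame arithmetic for one 68 Mbit/s "package": 378 six-bit words per frame.
WORDS_PER_FRAME = 378
BITS_PER_WORD = 6
PACKAGE_RATE = 68.736e6                                    # bit/s

bits_per_frame = WORDS_PER_FRAME * BITS_PER_WORD           # 2268 bits
frame_duration = bits_per_frame / PACKAGE_RATE             # ~33 microseconds
print(f"frame: {bits_per_frame} bits, {frame_duration * 1e6:.1f} us")

# Nominal (non-integer) word allocation per tributary; justification absorbs the remainder.
tributaries = {"video": 53.2e6, "TV sound": 676e3,
               "audio/data": 8.448e6, "error protection": 4.096e6}
for name, rate in tributaries.items():
    words = rate / PACKAGE_RATE * WORDS_PER_FRAME
    print(f"{name:>16}: {words:6.2f} words per frame (nominal)")
```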
Principles used for coding the video
Two main techniques were used to
reduce the bit-rate required to carry the TV picture: Differential
Pulse-Code Modulation and sub-Nyquist
sampling. We shall return to these below.
A third technique was also tried but discarded. This
involved coding only the active period of the signal that carried the
picture information, and removing the synchronizing information that makes
up almost 20% of the analogue signal. A new set of synchronising information
needs to be re-introduced at the output of the decoder. This proved
impractical to implement given the range of analogue input signals (often
far from theoretically perfect) that could be encountered in practice.
The theoretical requirement
In order to digitize an analogue TV signal without
substantially affecting its quality, any reasonably comprehensive
digital textbook will tell you that the signal must be sampled at more than
twice the highest frequency it contains and that those samples need to be
converted to a digital code-word of 8 to 10 bits to ensure reasonable
fidelity of the conversion process. This generally results in a bit-rate of
around 140Mbit/s for a single PAL-coded TV signal - about twice the bit-rate
we needed to achieve in practice!
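The arithmetic behind that figure is easy to check. The sampling frequency and word length below are illustrative assumptions (four times the PAL colour sub-carrier and 8 bits per sample); other common choices give broadly similar results.

```python
# Rough arithmetic behind the ~140 Mbit/s figure for straightforward PCM coding of PAL.
VIDEO_BANDWIDTH = 5.5e6             # Hz - top of the PAL video band
SAMPLING_RATE = 4 * 4.43361875e6    # 4 x colour sub-carrier, comfortably above 2 x 5.5 MHz
BITS_PER_SAMPLE = 8                 # 8-10 bits give adequate fidelity

assert SAMPLING_RATE > 2 * VIDEO_BANDWIDTH     # satisfies the sampling theorem
raw_rate = SAMPLING_RATE * BITS_PER_SAMPLE
print(f"{raw_rate / 1e6:.0f} Mbit/s")          # ~142 Mbit/s - about twice the 68 Mbit/s available
```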
Differential Pulse-Code Modulation (DPCM)
The first technique used to
reduce the bit-rate was Differential Pulse-Code Modulation. The principle
used is to transmit differences between the picture sample and a prediction
of that sample, rather than transmit the sample value itself. If good
predictions can be made of the expected value, then the differences will be
small and the data-rate required to transmit these differences can be kept low.
In the algorithm used in the 68PAL equipment the digital words used to
represent the differences are 6 bits long (a sign bit to indicate a positive
or negative difference, together with a 5-bit magnitude). Difference
magnitudes are coded non-linearly so that small differences (which occur
most frequently) are accurately represented. Large differences are not so
accurately represented, and thus the re-constructed signal will be
distorted, but since these large differences are usually associated with
unpredictable pictures (picture ‘cuts’ or lots of movement) then the
distortions are subjectively less important.
The predictor would ideally be based on previous sample
values but since these are not available at the decoder (these are what
we’re trying to avoid sending!) then the re-constructed sample values (based
on previously transmitted differences) must be used instead. Identical
predictors are used at the sending and receiving ends that take as their
input the decoded signal (implying that the coder must also include a local
decoder to drive the predictor – see Figure 2 which provides an outline
diagram of the operation of the DPCM coder). Note that the DPCM decoding
equipment is a duplication of the adder and predictor shown in the coder.
The prediction algorithm uses the two immediately preceding samples from the
same television line as the current sample, three samples from an adjacent
area of the picture in the previous line and three samples, again in an
adjacent picture area, from the previous field. The choice of the optimum
number of prediction terms, and their various weighting factors, was based
on a statistical analysis carried out on representative picture data.
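A much-simplified sketch of the loop may help. It uses a single previous-sample predictor and a crude square-law quantiser, rather than the eight-term predictor and the actual 6-bit law described above, so it illustrates the principle only.

```python
def quantise(diff):
    """Crude non-linear quantiser: fine steps for small differences,
    coarse steps for large ones. Illustrative only - not the 68PAL law."""
    sign = 1 if diff >= 0 else -1
    mag = min(31, int(round(abs(diff) ** 0.5 * 2)))      # 5-bit magnitude
    return sign, mag

def dequantise(sign, mag):
    return sign * (mag / 2.0) ** 2

def dpcm_encode(samples):
    codes, prediction = [], 0.0
    for s in samples:
        sign, mag = quantise(s - prediction)             # code the prediction error
        codes.append((sign, mag))
        prediction += dequantise(sign, mag)              # local decoder drives the predictor
    return codes

def dpcm_decode(codes):
    out, prediction = [], 0.0
    for sign, mag in codes:                              # same adder + predictor as in the coder
        prediction += dequantise(sign, mag)
        out.append(prediction)
    return out

samples = [10, 12, 15, 200, 198, 195]                    # a 'cut' from dark to bright
print([round(v, 1) for v in dpcm_decode(dpcm_encode(samples))])
```

Note how the reconstructed values track small differences closely but only approximate the large jump at the 'cut', exactly as described above.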
Sub-Nyquist sampling
Although it proved an effective method for reducing the
bit-rate, the use of sub-Nyquist sampling did not come without its
consequences:
The sampling rate used was twice the colour sub-carrier
frequency of the PAL signal (8.86MHz or 2fsc). This is
insufficient to ensure that the sampling does not introduce distortion - a
frequency of at least 11MHz is required. The result of this was that
distortion components were present (these are known as alias components) in
the frequency range 3.37MHz to 5.5MHz. Because of the nature of the TV
signals the energy in the signal tends to be clustered at multiples of the
TV line frequency (15.625kHz). By arranging for the alias components (which
will have a similar structure) to sit in between these wanted components, a
comb-filter (a filter with peaks and troughs every 16kHz above 3.37MHz) can
be used to minimize the effect of aliases whilst also minimizing the impact
on the wanted information. See Figure 3 for a more pictorial description of
the process.
In practice, the best results were obtained by initially
sampling the signal at four times the sub-carrier frequency (4fsc,
which is well above the minimum alias-free frequency), applying
comb-filtering to the 4fsc samples to remove components at
frequencies which would otherwise form aliases at exact multiples of line
frequency, forming the sub-Nyquist sample values (which in practice amounted
to no more than throwing away every other sample!) and finally
comb-filtering the 2fsc samples to remove the aliasing introduced
by removing the unwanted samples.
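A toy version of that chain might look like the following. It is illustrative only - a one-line-delay comb stands in for the rather more sophisticated filters actually used, and random data stands in for a line-locked PAL signal.

```python
import numpy as np

FSC = 4.43361875e6                     # PAL colour sub-carrier (Hz)
LINE_FREQ = 15625.0                    # PAL line frequency (Hz)

def comb(x, samples_per_line):
    """One-line-delay comb: response peaks at multiples of line frequency,
    with nulls mid-way between them - where the alias components sit."""
    delayed = np.concatenate([x[:samples_per_line], x[:-samples_per_line]])
    return 0.5 * (x + delayed)

fs4 = 4 * FSC
spl4 = int(round(fs4 / LINE_FREQ))     # ~1135 samples per TV line at 4fsc

x = np.random.randn(10 * spl4)         # stand-in for a line-locked 4fsc PAL signal

prefiltered = comb(x, spl4)            # remove what would otherwise alias onto the wanted components
sub_nyquist = prefiltered[::2]         # discard every other sample: now at 2fsc (~8.87 MHz)
output = comb(sub_nyquist, spl4 // 2)  # post-filter at 2fsc to suppress residual aliases
```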
Dealing with field interval signalling
The description above outlines the handling of the
picture information. A different mode of operation is required for
information carried within the TV signal during the field synchronization
period – the so-called Field Interval Signalling (FIS). During the
non-displayed field period, data and test signals are inserted onto the
blank TV lines. These do not have the same degree of predictability as the
pictures and do not lend themselves to the techniques used to reduce the
bit-rate of the video signal! Of these, the incoming Insertion Test Signals
(ITS) were blanked and reinserted after decoding using a standard commercial
ITS inserter. For other FIS information (such as Teletext) the approach
taken was to re-quantise the incoming 4fsc data from 10-bits down
to 3-bits per sample (in the FIS coder shown at the top of Fig. 2). 3-bit
quantizing is adequate for the data information as the data itself only
represents digital (0 or 1) values.
Pairs of the 4fsc 3-bit samples are collected
into 6-bit words at a 2fsc word rate and passed to the output
data switch for transmission (see Figure 2). At the same time, the other
pole of the data switch (to the left of the quantiser block) feeds dummy
data into the predictor to prevent misbehaviour.
At the decoder end, during the field interval, the FIS
3-bit samples are converted back to 10-bit values and a data selector
bypasses the DPCM decoder and passes the samples directly to the output
Digital to Analogue Converter to re-generate a close approximation of the
original Teletext signals. As in the coder, dummy data is fed into the
decoder predictor during this period to keep the predictors at both ends of
the system in the same state at the end of the FIS period and at the
resumption of picture transmission.
The means for signalling to the decoder that the incoming
samples are for FIS rather than video relies on the fact that only 63 of the
64 possible 6-bit codes are used for DPCM video data. The unused code word
is used exclusively to signal that a block of FIS data follows. This signal
together with validity bits is used to operate the data switches between the
video and FIS modes of operation. The length of the block of FIS data is
pre-determined, so there is no need to signal the end of the block.
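A minimal sketch of the word packing and signalling just described follows. The value of the reserved code word and the FIS block length are not recorded in the text, so those below are placeholders.

```python
# Illustrative only - not the project's actual bit assignments.
FIS_ESCAPE = 0b111111              # assumed value of the single reserved 6-bit code word

def pack_fis(samples_3bit):
    """Pack pairs of 3-bit FIS samples (taken at 4fsc) into 6-bit words at
    the 2fsc word rate, preceded by the escape word meaning 'FIS block follows'."""
    words = [FIS_ESCAPE]
    for a, b in zip(samples_3bit[0::2], samples_3bit[1::2]):
        words.append((a << 3) | b)
    return words

def unpack_fis(words):
    """Recover the 3-bit samples; at this point the decoder bypasses the DPCM loop."""
    samples = []
    for w in words[1:]:                            # skip the escape word
        samples.extend([(w >> 3) & 0b111, w & 0b111])
    return samples

teletext_line = [0, 7, 7, 0, 3, 4, 0, 7, 7, 0, 1, 6, 0, 7, 7, 0]
assert unpack_fis(pack_fis(teletext_line)) == teletext_line
```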
The TV sound was carried using
the, by then well-proven, BBC standard two-channel NICAM transmission
equipment which multiplexed the two audio channels into a bit-rate of
676kbit/s.
The other tributaries
An 8448kbit/s tributary could be used to carry a flexible
mixture of audio and data signals using existing commercial or the BBC’s own
NICAM-based audio distribution systems.
4096kbit/s was allocated for error protection although
this could also have been used for additional audio or data channels in
situations where the benefits of error protection were not required.
A Reed-Solomon error correcting code was used. These
codes work by treating the digital bit-stream as multi-bit symbols and, in
our case, each 6-bit digital word became a symbol of the code. The
code works by adding a number of check (or parity) symbols to the
transmitted signal allowing errored symbols to be recognised and corrected
in the decoder. The analysis and correction is carried out on the symbols
rather than individual bits. These codes are very powerful and particularly
good at dealing with errors that occur in bursts.
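The exact code parameters are not reproduced here, so the snippet below only illustrates two generic Reed-Solomon facts that underpin the description above: 2t check symbols buy the correction of any t errored symbols, and a burst of errored bits can only straddle a limited number of 6-bit symbols.

```python
import math

SYMBOL_BITS = 6                       # each 6-bit multiplex word is one code symbol
MAX_BLOCK = 2**SYMBOL_BITS - 1        # a standard RS code over GF(64) spans at most 63 symbols

def parity_symbols_needed(t):
    """2t check symbols let the decoder correct any t errored symbols in a block."""
    return 2 * t

def symbols_hit_by_burst(burst_bits):
    """A burst of consecutive errored bits straddles at most this many 6-bit symbols."""
    return math.ceil(burst_bits / SYMBOL_BITS) + 1

print(MAX_BLOCK, "symbols maximum per code block")
print(parity_symbols_needed(3), "parity symbols correct any 3 errored symbols")
print(symbols_hit_by_burst(24), "symbols at most are hit by a 24-bit error burst")
```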
They are now commonplace in CD-players and the like. However, this
commonplace use has relied on the introduction of large-scale integration
techniques that were not available to the project. In those days, the
implementation of the codes was quite a challenge – involving a significant
number of integrated circuits to implement (45 at the coder and 130 in the
decoder) and a very clever engineer who was up to the somewhat esoteric
mathematics involved in the correction algorithm!
The error correction system proved vital to the field
trial – not only because the performance of the new link was unknown – but
also to allow its error statistics to be compiled in both the short and the
long term. A further microprocessor-based unit in the de-multiplexer not
only gave an LED bar-graph indication of the current error rate, but could
also be linked to an external data logger to record long-term information.
On a spare 68Mbit/s channel from Birmingham to London
(the return paths were not used during the pilot) the error statistics were
permanently monitored to provide a picture of the nature of errors on the
link.
Diagnostics, maintenance, and monitoring
As a diagnostic aid, an integral test signal generator
was included. The test signal occupied 32 picture lines and comprised a
pedestal and ramp. The various feedback loops could be broken to provide
open-loop operation at the flick of a switch. This was essential for fault
finding because, in closed-loop circuits, it was nigh on impossible to
determine what was going on when everything went haywire. The test signal
also facilitated digital signature analysis. For readers unfamiliar with
this technique, it was a Hewlett-Packard invention. It consists of a pseudo
random counter whose clock is derived from the system under test. This
counter is started and stopped by chosen events in the system under test
using probes that connect to test points. The displayed alphanumeric count
is unique (almost) to the part of the system under test. Once these displays
have been noted for a working system they can be used to verify quickly the
correct operation of further cards built at a later date, or to identify a
faulty one.
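For readers who like to see the idea in miniature, here is a rough software model of a signature analyser. The register width and feedback taps are illustrative rather than the actual Hewlett-Packard values.

```python
def signature(bits, taps=(6, 8, 11, 15), width=16):
    """Compress the stream of bits seen at a probe point, between chosen start
    and stop events, into a short alphanumeric 'signature'."""
    reg = 0
    for bit in bits:                              # one bit sampled per system clock
        feedback = bit
        for t in taps:
            feedback ^= (reg >> t) & 1            # XOR of probe data with tap bits
        reg = ((reg << 1) | feedback) & ((1 << width) - 1)
    return f"{reg:04X}"

good = signature([1, 0, 1, 1, 0, 0, 1, 0] * 25)   # data captured between start/stop events
print(good)   # compare against the value noted at the same point on a working card
```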
Each system also contained a microprocessor-based
monitoring card. This card ran the same algorithms as the equipment in which
it sat – but could only do so at a relatively slow rate. Sampling points
were introduced at key system points that took “snapshots” of the data being
processed. The microprocessor collected these snapshots and verified them
against its own model of the video coding algorithms. The same idea was used
later in the Mk2 6-channel NICAM equipment but seems to have fallen out of favour now that
microprocessors are able to operate at the speed necessary to do all the
high-speed processing themselves! Monitoring information captured at the
sending end (such as the condition of the signals at the various input
ports) was sent to the receive terminal by using the slots allocated to
justification words (see Box - Plesiochronous multiplexing) when they were
not needed to carry real data, and so did not entail any overhead in
bit-rate.
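The snapshot-checking idea can be sketched as follows; the names and the stand-in algorithm are hypothetical, the point being that a slow software model only needs to agree with occasional latched samples, not keep up with the hardware.

```python
def hardware_model(x):
    return (3 * x + 1) % 64            # stand-in for the algorithm implemented in hardware

def check_snapshot(snapshot):
    """snapshot = (input word, output word) latched simultaneously at one key point."""
    x, y = snapshot
    return hardware_model(x) == y      # does the slow software model agree with the hardware?

snapshots = [(10, 31), (20, 61), (5, 16)]          # collected, at leisure, by the monitoring card
print(all(check_snapshot(s) for s in snapshots))   # True for healthy hardware
```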
The construction was modular and based on ‘4U’ cards
housed in standard width crates. Each terminal bay (transmit or receive) was
1.25m high and incorporated fan-assisted cooling since the power consumption
approached 500W (see Figure 4).
Figure 4 –
Rhys Lewis with the 68PAL coder (on the left) and decoder
Figure 5 – A
Solderwrap card (wiring side)
Most of the cards used an automated wiring system known
as “Solder wrap” (not heard of before or since!) – see Figure 5. It is very
much like the more familiar Wire Wrap (used extensively at that time for
prototype work) except that the wiring was done by machine, and the joints
were soldered as well as wrapped.
Standard card layouts were used which were pre-drilled to
accept the integrated circuits. A special wire was used, which was around
36swg, self-fluxing, and carried in bundles supported by special plastic
guides.
The typical problems encountered during manufacture
included: shorts from solder splashes, wiring errors (due to human error on
the original drawings) and reversed devices since these had to be manually
inserted before the solder wrapping began. Given a certain amount of
dexterity, a pair of fine tweezers, 20/20 eyesight and considerable
patience, it was possible to repair or modify Solder Wrap cards but it was
not uncommon for such intervention to add to the toll of broken wires!
This build form was not suitable for all types of cards
and so some, which used a lot of high-speed logic or which were critically
dependent on timing delays, used early multi-layer printed circuit boards
(PCBs). These were only just becoming possible through the use of Computer
Aided Design (CAD) techniques for generating PCB artwork (rather than the
sticky black tape and large sheets of paper that were in common use before
then!) and more sophisticated manufacturing processes for the PCBs
themselves. Three-layer PCBs were used with the middle layer used as a
ground plane. This allowed the tracks on the outer layers to be treated as
transmission lines with well-defined transmission properties. High-speed
inter-crate connections within a bay relied on the use of balanced signals
transmitted via twisted-pair ribbon cables.
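For a feel of those "well-defined transmission properties", a common microstrip approximation gives the track impedance from the board geometry. The dimensions below are purely illustrative; the actual 68PAL stack-up is not recorded here.

```python
import math

def microstrip_z0(h_mm, w_mm, t_mm, er):
    """Characteristic impedance of a surface track over a ground plane,
    using the widely quoted approximation (valid for roughly 0.1 < w/h < 2)."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Illustrative stack-up only: 0.4 mm to the ground plane, 0.6 mm track, 35 um copper, er ~4.5
print(f"{microstrip_z0(h_mm=0.4, w_mm=0.6, t_mm=0.035, er=4.5):.0f} ohms")
```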
As intimated above, the picture quality was good.
Subjective tests were conducted using the CCIR 5-point quality grading
method. The results are shown in Table 2.
These results show that the video coding scheme did not
introduce any significant impairment of the original PAL signal. Tests also
showed that no change in impairment could be observed between pictures
processed through one codec, or through three codecs in cascade.
The impact of errors
With error correction disabled, the subjective effect of
random errors was judged to be “perceptible but not annoying” at a BER of
about 1×10⁻⁶. When the error correction was enabled, the effects
of errors remained “imperceptible” for a BER up to about 1×10⁻⁴.
A similar performance was also obtained for the sound channels.
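To put those figures in perspective, some back-of-envelope arithmetic (not measured project data) shows how many raw bit errors those BERs imply at the package rate.

```python
PACKAGE_RATE = 68.736e6                      # bit/s in one 68 Mbit/s package
for ber in (1e-6, 1e-4):
    errors_per_second = PACKAGE_RATE * ber
    print(f"BER {ber:g}: about {errors_per_second:,.0f} bit errors every second")
# roughly 69 errors/s at 1e-6 and 6,900/s at 1e-4 - hence the value of the error correction
```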
An unexpected effect
One of the other practical
problems encountered was the limited noise immunity of some of the
circuitry. Impulsive interference was the cause, with noise induced
from transient events
causing mis-operation of the circuitry. In particular, some of the
high-speed logic parts used suffered from a relatively high sensitivity to
this effect with particular problems for “clock” signals - since these
provided a common synchronising signal for a large proportion of the
circuitry.
The effect of such impulses was the appearance of the
“tadpoles” described earlier, gently swimming about the picture, though
usually disappearing after a few seconds. Quite calming and therapeutic
really, as long as you didn’t have to fix it! Investigation revealed that
the effect was also present on analogue distribution equipment in the
laboratory but that the ‘spike’ caused by the impulsive noise was not very
visible (a single pixel of white or black). In the digital equipment the
processing amplified the effect of a single error and made the impact
considerably more visible. We minimised the impact of the effect,
though never removed it entirely, by careful modification including the use
of balanced clock signals.
After many hours of tests and de-bugging, the first full
system demonstration took place at Kingswood Warren in December 1982. By the
following April we had several working crates for testing. In June 1983 a
note in David’s laboratory book reads “today we sent ‘68’ to the [British
Telecom] tower and back – all worked fine”. The system entered full-scale
operational use between London and Birmingham in early January 1984.
The 68PAL equipment was first demonstrated to the world
at large on the BBC’s stand at the International Broadcasting Convention in
Brighton in September 1984. It attracted much interest, in particular for a
comparator that we’d built to allow people to see where the artefacts
occurred! The comparator isolated the codec artefacts so that they could be
viewed separately. The method used was to subtract the decoder’s output
signal from the original input signal to the coder – leaving just the
differences. This could be done with very high precision in the digital
domain. The visitor was presented with three push buttons situated beneath
the monitor screen. The first of these routed the original input signal to
the monitor. The second routed the comparator output (showing just the
coding artefacts) to the monitor. The third routed the decoder output to the
monitor. The ability to view the artefacts alone, “told” the viewer what to
look for in the way of impairments in the decoded picture as it would be
seen in the home. Without such sensitisation, the impairments were in fact
quite difficult to spot!
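In essence the comparator formed the difference picture sample by sample. A toy version, with small arrays standing in for real video, might look like this; the mid-grey offset for display is an assumption about how both polarities of error would be made visible.

```python
import numpy as np

def difference_picture(original, decoded, mid_grey=128):
    """Subtract the decoder output from the coder input, sample for sample
    (exact in the digital domain), and offset to mid-grey for display."""
    diff = original.astype(int) - decoded.astype(int)
    return np.clip(diff + mid_grey, 0, 255).astype(np.uint8)

original = np.array([100, 120, 140, 160], dtype=np.uint8)
decoded  = np.array([100, 121, 139, 160], dtype=np.uint8)
print(difference_picture(original, decoded))    # [128 127 129 128] - only the artefacts remain
```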
The sort of things which were just noticeable were a
degree of edge “busyness” on the castellation round the edges of the
standard analogue test card (Test Card F), and perhaps an occasional
“twinkle” at the edge of a bright, fast-moving object such as a skier. The
worst effect was a “twittering” observable on horizontal boundaries between
saturated colours. This was not seen often on normal pictures but was most
evident on the Teletext pages that were transmitted as the normal vision
signal on BBC2 during the day (remember, this was before the advent of
daytime television).
…and in the end!
After many months of successful operation the pilot came
to an end. Although the technical performance of the equipment was not in
any doubt, the economic case could not be made for a move to digital
distribution at that time. Since then of course the advent of more and
faster processing in a chip has led to the development of much cleverer
algorithms for reducing the bit-rates required for video and audio
transmission. It would now be uncommon to discover an analogue link in use
in the distribution infrastructure – but, perhaps, that’s the subject of
another note.
A considerable number of people were involved in the
inception and implementation of the 68PAL affair. The bulk of the work in
Designs Department was done by Rhys Lewis, David Birt and John Robinson with
able support from John Weston and many other colleagues – whether
engineering, management or support staff.
It would also be unfair not to mention our colleagues in
the BBC’s Research Department – 68PAL was after all their baby! Of
particular note are the contributions of
Nick Wells, Jonathan Stott and
Keith Slavin (Keith was the engineer who designed the error-correction
system).
Our apologies go to those whose names we’ve managed to
miss through ignorance or forgetfulness.