In the last two posts I have introduced jitter as a digital phenomenon,
explained what it is, what it does, how we can measure it, and
discussed whether or not we can hear it. All this was started off by
the observation that different "Bit Perfect" audio players can sound
different, even though all they are doing is using the same hardware
to send the same precise data to the same DAC. We postulated that, in
the absence of anything else, the only possible mechanism to account for
those differences had to be jitter. So we took a long hard look at
jitter to see if it fit the bill. And maybe it does, but we didn't
exactly explain how software can affect jitter. Before doing so, we are
going to take a fresh look at jitter, this time from an analog
perspective.
You see, although it is easy to think of digital data in binary terms (it is either a one or a zero), processed at some specific instant in time (defined to the picosecond, so we can account for "jitter"), in reality it is never quite so simple as that.
Let's look at a simple circuit, designed to take a digital input and,
at a point in time defined by a clock signal, output a certain analog
voltage. For the purposes of understanding jitter, there are two things happening here. First of all, the circuit is monitoring
the voltage which represents the tick-tock of the clock signal, in order to determine
when it is ticking and when it is tocking. And secondly, once the clock
is triggered, the output circuitry is applying the desired voltage to
the output.
We'll start with the tick-tock of the clock. A
clock signal is typically a square wave. It oscillates sharply
between fully "on" (a high voltage) and fully "off" (a low voltage),
and it does so very abruptly. Our circuit measures a "tick" when it
detects the signal transitioning from low to high, and a "tock" when it
detects a transition from high to low. Tick is generally the point at
which the clock signal triggers an event, and the function of Tock is
generally to mark the point at which the circuit starts looking again
for the next Tick. Digital electronics are very easy for the layman to
comprehend at a functional level, because the concepts are very easy to
visualize. We can easily imagine a square-wave clock signal, and the
nice, clean, rising and falling edges of the Tick and the Tock. At some
point most of us have seen nice, helpful diagrams. What's the problem?
A real-world clock signal has some real-world problems to deal with.
First of all, we need to look at those high and low voltages that we are
measuring. If we look closely enough, we will see that there is
usually a lot of high-frequency noise on them. They are not the nice
clean square waves we expected them to be. The impact of all this noise
is that it gets harder to guarantee that we can distinguish the "high"
voltage from the "low". It is no use getting it right 99 times out of
100. We need to get it right a million times out of a million. The
higher the speed of the clock, the worse this gets. But it is quite
easy to fix. If the problem is caused by high-frequency noise, all we
need to do is to filter it out using a Low-Pass Filter. Let’s do that.
Now two things have happened. First, the trace has indeed thinned out,
and we can accurately determine whether the clock voltage is "high" or "low". But we also see that the transitions between high and low now occur at a more leisurely pace. The exact point of the transition is no
longer so clear. There is some uncertainty as to precisely when the "Tick"
actually occurs. Because of this filtering, there is a trade-off to be
had between being able to detect IF a transition has occurred, and
exactly WHEN it occurred. If our Low-Pass Filter rolls off at a
frequency just above the clock frequency, we do a great job of filtering
the noise but it gets correspondingly less certain WHEN the transition
occurs - in other words we have higher jitter. The amount of
uncertainty in the WHEN can be approximated as the inverse of the
roll-off frequency of the filter. The roll-off frequency is therefore normally the highest we can make it without compromising our ability to be certain about detecting the IF. So if we need to be able to function with a higher roll-off frequency, and thereby reduce the uncertainty
in the WHEN, we need to re-design the circuit to reduce the noise in
the first place.
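To make the trade-off concrete, here is a small simulation sketch in Python (using NumPy and SciPy). Every number in it - the clock frequency, the noise level, the filter cut-offs - is invented purely for illustration, and the function name is ours; it does not describe any real clock circuit. It builds a noisy square-wave clock, low-pass filters it at various cut-off frequencies, detects rising-edge threshold crossings, and reports both how many "Ticks" were detected (the IF) and the timing spread of those Ticks (the WHEN):

```python
# Toy illustration of the IF/WHEN trade-off. All parameters are invented
# for illustration and do not describe any real clock circuit.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)

fs = 1e9             # simulation sample rate, 1 GHz
f_clk = 1e6          # a 1 MHz clock
n_periods = 200
t = np.arange(int(n_periods * fs / f_clk)) / fs

# Ideal 0 V / 1 V square-wave clock, plus broadband noise
clock = (np.sin(2 * np.pi * f_clk * t) > 0).astype(float)
noisy = clock + rng.normal(0.0, 0.15, t.size)

def ticks(signal, cutoff_hz):
    """Low-pass filter, then find rising-edge crossings of the 0.5 V threshold.
    Returns (number of detected Ticks, rms timing error in seconds)."""
    b, a = butter(2, cutoff_hz / (fs / 2))
    filtered = filtfilt(b, a, signal)
    idx = np.where((filtered[:-1] < 0.5) & (filtered[1:] >= 0.5))[0]
    times = t[idx]
    ideal = np.round(times * f_clk) / f_clk   # nearest ideal clock edge
    return len(times), np.std(times - ideal)

for cutoff in [2e6, 20e6, 200e6, 450e6]:
    n, jitter = ticks(noisy, cutoff)
    print(f"cutoff {cutoff/1e6:6.0f} MHz: {n:4d} Ticks detected "
          f"(expected {n_periods}), timing spread {jitter*1e9:7.2f} ns")
```

With the cut-off set just above the clock frequency, every Tick is detected but its timing is smeared; raising the cut-off tightens the timing, until eventually so much noise gets through that spurious Ticks start to appear and the IF is compromised.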
The take-away from all this is that the presence of high-frequency noise is in itself a jitter-like phenomenon.
One way to design an accurate clock is to run the clock "N" times
faster than we actually need, and rather than count every Tick, we count
every Nth Tick. We call this process "clock multiplication", and we
can - in principle - achieve an arbitrarily low jitter on our clock by
continuing to increase the clock multiplier. This is, in fact, the way
all real-world clocks are built. Any way you do it, though, it gets
exponentially more expensive the faster the clock gets, due to an increasingly arduous noise problem. Wickedly so, in fact. If you are a DAC manufacturer, it really is a simple question of how much you
want to pay to reduce your master clock jitter!
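To put a rough number on that, here is a toy Monte Carlo sketch, again in Python. The noise model - each fast-clock period carrying an independent random timing error that is a fixed fraction of that period - is a simplifying assumption of ours, made only for illustration, not a description of any real oscillator. Under that assumption, the divided-down clock's period jitter, expressed as a fraction of its own period, shrinks roughly as the square root of the multiplier N:

```python
# Toy model of "count every Nth Tick". Assumption (for illustration only):
# each fast-clock period carries an independent random timing error equal
# to a fixed fraction of that period.
import numpy as np

rng = np.random.default_rng(1)

T = 1.0          # desired output clock period (arbitrary units)
j_frac = 0.01    # fast clock's per-period timing noise, as a fraction of its period
n_out = 50_000   # number of output clock periods to simulate

for N in [1, 4, 16, 64]:
    fast_period = T / N
    # Each output period is the sum of N noisy fast-clock periods
    periods = fast_period + rng.normal(0.0, j_frac * fast_period, (n_out, N))
    rel_jitter = np.std(periods.sum(axis=1)) / T
    print(f"N = {N:2d}: period jitter = {rel_jitter:.5f} x T "
          f"(expect ~ {j_frac / np.sqrt(N):.5f})")
```

The square-root improvement only materializes if the fast clock's own relative timing noise holds steady as it gets faster - which, as noted above, is exactly the hard and expensive part.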
And it's not
just the clock itself that has to be high speed. For example, any
circuit anywhere in your DAC that needs to operate such that events can
be synchronized to within 1ns must, by definition, have a bandwidth
exceeding 1GHz. That, dear reader, is one heck of a high bandwidth.
Not only does your clock circuitry have to have GHz bandwidth, so does
the converter circuitry which is communicating with it to synchronize
its measly 44.1kHz operation with your GHz clock. Otherwise - in
principle at least (because nothing is ever so black and white) - you
would be wasting the expense you went to in generating your super-clock
in the first place. In any case, it becomes a given that a real-world
DAC will be constructed with electronics having a bandwidth which - if
not necessarily in the exotic territory of GHz - will still be much
higher than the sample rates of the conversions it is tasked to perform.
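As a rough sanity check on those numbers, the rule of thumb in play here is simply that resolving events to within a time t requires a bandwidth of the order of 1/t (real designs use more refined criteria, such as bandwidth of roughly 0.35 divided by rise time, but the orders of magnitude come out the same):

```python
# Rule of thumb: resolving events to within time t needs bandwidth of order 1/t.
for t in [1e-6, 1e-9, 1e-12]:
    print(f"timing precision {t:.0e} s  ->  bandwidth of order {1/t:.0e} Hz")
```

Sure enough, 1ns of timing precision lands us at 1GHz of bandwidth.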
Designing a circuit with RF (Radio Frequency) bandwidth, and having it
exhibit good immunity from RF noise, is generally a no-win prospect.
When dealing with RF, every problem you solve here results in a new one
popping up there. RF circuits are inherently sensitive to RF
interference, and so you need to design them - and package them - in such a
way as to make them immune to the effects of external RF.
External RF is everywhere. It has to be, otherwise your cell phone
wouldn’t work. In a circuit, RF does not just flow neatly along wires
from one component to the next. It also radiates from the wires and
from the components, all the time. And it is not just RF circuits that
generate RF. Your house wiring is rife with RF. Just about every
electrical appliance you own - all the way down to the dimmer switches
on your lights - emits RF. My central heating just turned on - sending a
massive RF spike throughout the house, not to mention through the
electrical mains line to my neighbour’s house. As a DAC designer, you
can do your job diligently to protect yourself from all this, and at
least minimize the ability of RF to sneak into your circuit from the
surroundings. But you can’t do much about the RF that sneaks in through
connections that are designed to transmit RF bandwidths in the first
place! Such as the USB and S/PDIF connectors through which your music
data enters the DAC.
A USB connector is by design a high-bandwidth port. The USB spec calls for it to be so. Within its defined
bandwidth, any RF noise injected at one end will be faithfully
transmitted down the cable and out the other end. And, in your case,
straight into your DAC. This will be so, even if you don’t need all
that bandwidth to transmit the music data. The USB connection is indeed
a very noisy environment. This is because, in order to just transmit
raw data, you are fully prepared to sacrifice timing accuracy (the WHEN)
for data accuracy (the IF). Therefore, so long as the amount of
spurious RF interference injected into the link is not so much as to
compromise the data transmission, the intended performance of the USB
link is being fully met. So, if the internals of a computer are
generating a lot of spurious RF, there is good reason to imagine that a
lot of it is going to be transmitted to any device attached to it via a
USB cable. Such as your DAC.
What are the sources of spurious
RF inside a computer? For a start, every last chip, from the CPU down,
is a powerful RF source. And the harder these chips work - the more
cycles they run - the more RF they will emit. Disk drives generate lots
of RF noise, as do displays, Ethernet ports, and Bluetooth and Wi-Fi adapters.
So it is not too much of a stretch to see how playback software which uses more CPU, makes more hard disk accesses, makes things happen on the display, and communicates over your networks has the potential to impact the sound being played on the DAC connected to the computer. Not
through 'classical' digital jitter, but through RF interference, whose effects we now see are all but impossible to distinguish from those of jitter.
This, we
believe, is the area in which BitPerfect does its thing. Through trial
and error, we have established what sort of software activities result
in sonic degradation, and which ones don’t. We have a number of very specific objective measurements that we make when optimizing our playback engine which correlate rather well with our subjective assessments of sound quality. It doesn’t mean we have the
exact mechanisms nailed down, but it does mean that we have at least a basic handle on the cause-and-effect relationships.