## Clock Jitter Propagation

Here's a high-performance racing game. Imagine you are drafting at 100 mph, just inches behind the next car on a long, straight section of interstate highway. Your job is to follow (track) the movements of the other vehicle as precisely as possible. The other driver turns his wheel this way and that, trying to throw you off his tail.

If your opponent gradually moves his wheel, you have no difficulty tracking his movements; you see and respond to the graceful swerves of his vehicle and follow wherever he goes. This scenario illustrates your tracking behavior.

If your opponent grabs his wheel and violently shakes it, without changing the overall average direction of his vehicle, it makes almost no difference to your strategy. His car may vibrate terribly, but as long as you follow his average direction, you're still probably close enough to draft effectively. This scenario is your filtering behavior. You don't even try to duplicate the shaking motion; you just filter it out.

You can make a chart that shows the frequency response of your steering operations (Figure 1). To chart this situation, have your opponent begin moving his vehicle back and forth across the road in a slow, undulating motion: y1(t) = a1·sin(ωt). Record the frequency, ω, of his undulations; the amplitude, a1, of his undulations; and the amplitude, a2, of your response. As your opponent slowly increases his rate of undulation from slow to very rapid, chart the system gain, a2/a1, versus frequency.
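As a rough sketch of what such a chart records, the following models the trailing car as a simple first-order low-pass tracker with an assumed cutoff frequency f_c (this model and the function name are illustrative assumptions, not part of the original experiment):

```python
import math

def tracking_gain(f, f_c=1.0):
    """Gain a2/a1 of an assumed first-order tracker with cutoff f_c.

    Well below f_c the follower tracks (gain near 1); well above f_c
    the follower filters (gain falls toward 0).
    """
    return 1.0 / math.sqrt(1.0 + (f / f_c) ** 2)

for f in [0.1, 0.5, 1.0, 2.0, 10.0]:
    print(f"f = {f:5.1f}  gain a2/a1 = {tracking_gain(f):.3f}")
```

Sweeping f from low to high and plotting these gain values reproduces the flat tracking region and the roll-off of the filtering region described above.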

At frequencies within your tracking range, the amplitudes should match perfectly, so the gain is flat (unity gain) in this area. At frequencies within your filtering range, the gain should descend rapidly to zero, because in that area you don't respond.

The interesting part happens at the boundary between these two ranges. Most drivers, as the lead car's undulations approach some critical rate, develop acute difficulties. Their response may significantly lag the motions of the lead car, and, in their anxious attempts to make up for this delay, they overshoot the mark at the apogee of each excursion. As a result, their frequency-response chart exhibits a gain greater than unity at some frequencies. If the overshoot is severe, it appears as a large resonant peak in the frequency-response diagram. A system that lacks any resonant peak is said to be well-damped.
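The overshoot-versus-damping tradeoff can be sketched with a standard second-order low-pass model (an assumption for illustration; the parameter names are hypothetical). With damping ratio ζ below about 0.707 the magnitude response shows a resonant peak (gain above unity) near the natural frequency; with ζ at or above that value it stays at or below unity, i.e., well-damped:

```python
import math

def gain(w, zeta, wn=1.0):
    """|H(jw)| for the second-order model H(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2)."""
    r = w / wn
    return 1.0 / math.sqrt((1 - r * r) ** 2 + (2 * zeta * r) ** 2)

# Sweep frequencies 0.01..2.99 and find the largest gain in each case.
peak_underdamped = max(gain(w / 100, zeta=0.3) for w in range(1, 300))
peak_damped = max(gain(w / 100, zeta=1.0) for w in range(1, 300))
print(f"zeta=0.3 peak gain: {peak_underdamped:.3f}")  # above 1: resonant peak
print(f"zeta=1.0 peak gain: {peak_damped:.3f}")       # at or below 1: well-damped
```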

A mild resonance at the tracking boundary can, in some cases, help minimize the average tracking error. The practice of causing a mild resonance at the crossover frequency is called PLL peaking. A peaking feature would be a good thing if yours were the only car in the experiment, but any sort of resonance, even a tiny one, spells disaster for a highly cascaded system.

For example, imagine a long chain of N cars drafting each other on the highway. Suppose the first car commences gyrations having a peak-to-peak amplitude of 1 cm precisely at the resonant frequency. If the overshoot of each car at resonance amounts to 10% (a gain of 1.1 at resonance), the gyrational amplitude of car number 2 is 1.1 cm, car number 3 is 1.21 cm, and so on, until at car N the gyrational amplitude is 1.1^(N−1) cm. Fifty cars down the line, the peak-to-peak amplitude works out to 1.1^50 ≈ 117 cm (if the cars don't careen off the road first).
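The compounding above is plain geometric growth; each stage multiplies the amplitude at the resonant frequency by its own gain. A quick check of the arithmetic (function name is illustrative):

```python
def cascade_amplitude(a0_cm, stage_gain, stages):
    """Peak-to-peak amplitude after `stages` cascaded stages,
    each applying the same resonant-frequency gain."""
    return a0_cm * stage_gain ** stages

# 1 cm seed, 10% overshoot per car, 50 cars down the line:
print(round(cascade_amplitude(1.0, 1.1, 50), 1))  # ~117.4 cm
```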

--------------------------

POSTLOG: IBM ran into difficulty with peaking during the development of the ill-fated "IBM token-ring" local area network standard, which was designed to compete with Ethernet in the early days of computer networking. The original design of token-ring specified rings of up to 16 devices connected in series, with bridges connecting the rings into larger structures. As the standard developed, the token-ring supporters discovered that Ethernet could support 256 devices attached to a single length of coaxial cable with no bridges. The lack of need for bridges was a significant cost advantage. In reaction to that discovery, the token-ring standard was quickly changed to specify support for up to 512 devices in a single ring. It sounds like a simple change; after all, if the devices are just connected together with cables, why not form one great, big ring?

Months later, while the Ethernet camp was still struggling to make a cost-effective physical implementation of its design, Texas Instruments delivered the first "single-chip" integrated solution for a token-ring controller. They, along with IBM, the other big supporter, assumed this would deal a death blow to Ethernet. They were wrong.

The TI engineers had incorporated a small degree of peaking into the PLL clock-recovery circuit in each chip, intended to quicken the lock-in time under certain circumstances. The peaking was only about ½ dB, but when they assembled 512 devices in series to check the maximal configuration, the frequency response of the whole PLL chain exhibited a peak with a gain of approximately 256 dB at the peaking frequency. Every time they booted up the system, tiny amounts of thermal noise (jitter) present at the critical frequency drove the entire PLL chain into horrible undulations. There was no fix but to re-spin the chip, which took about a year. In the intervening time, Ethernet won the LAN wars.
That's why your computer has Ethernet on it today.
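The 256 dB figure follows directly from the fact that cascaded gains expressed in decibels simply add. A short check of that arithmetic, and of how astronomical the equivalent linear gain is:

```python
# Gains in dB add in cascade: 512 chips x 0.5 dB of peaking each.
chips, peaking_db = 512, 0.5
total_db = chips * peaking_db
# Convert an amplitude gain in dB back to a linear ratio: 10**(dB/20).
linear_gain = 10 ** (total_db / 20)
print(total_db)              # 256.0 (dB)
print(f"{linear_gain:.2e}")  # roughly 6.3e12: astronomical amplification
```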