Jitter and SNR Combined

The first nip of cold weather has arrived at our house. My wife and I have started harvesting firewood from our Aspen groves. Last week we sawed up a few big trunks for our neighbors, who have gotten too old to chop their own. Old timers in this area tell us that chopping wood "warms you twice—once in the cutting and once in the final enjoyment."

Doron Levy of Motorola writes regarding high-speed serial links:

Do you have any idea how to convert Jitter to SNR?

Thanks for your interest in High-Speed Digital Design.

Regarding your inquiry, Doron, you have asked a complicated question. I may not be able to provide a complete answer in this brief email, but I will endeavor to point you in the right direction.

First, a definition appropriate for serial digital connections:

SNR (in this discussion) is the ratio of the ideal signal size to the sampling error, where sampling error is defined as the difference between the actual sampled value and the ideal value expected at the moment of clocking.
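
As a minimal sketch of that definition (the function name and the RMS convention are my own choices, not part of any standard), the ratio can be computed like this:

    import numpy as np

    # SNR per the definition above: ideal signal size divided by the
    # RMS sampling error, expressed in dB.
    def snr_db(actual, ideal, signal_size):
        error = np.asarray(actual, dtype=float) - np.asarray(ideal, dtype=float)
        rms_error = np.sqrt(np.mean(error ** 2))
        return 20.0 * np.log10(signal_size / rms_error)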

Let's do two examples. First, imagine you have a perfectly square-edged signal, with flat, noiseless logic levels of exactly one and zero. If perfectly sampled in the center of each bit cell, this signal is totally noiseless, with an infinite SNR. Now add random jitter to each transition. If you add an amount of jitter that NEVER EXCEEDS, say, one-fourth of the bit interval, and further presuming that this jitter does not unduly affect recovery of the sampling clock, then the data sampled during each bit cell should still exhibit zero noise (infinite SNR). This example shows that sometimes, depending on the shape of the signal, jitter has NO IMPACT on SNR.
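
To see why, consider the following toy simulation (every parameter is an illustrative choice of mine, and it assumes the sampling clock stays centered in each cell):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 1.0                                  # bit interval, arbitrary units
    n = 10_000
    bits = rng.integers(0, 2, n)
    jit = rng.uniform(-T / 4, T / 4, n + 1)  # edge jitter, bounded at T/4

    errors = 0
    for i in range(n):
        t_sample = (i + 0.5) * T             # ideal center-of-cell sample
        lead = i * T + jit[i]                # jittered leading boundary
        trail = (i + 1) * T + jit[i + 1]     # jittered trailing boundary
        in_cell = lead < t_sample < trail    # sampler still inside cell i?
        errors += 0 if in_cell else 1
    print(errors)  # 0: jitter bounded at T/4 never reaches the sampler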

In the second example, jitter will have a distinct impact on SNR. Let the signal now have finite rise and fall times, with durations comparable to one bit interval. This arrangement creates a classic eye pattern diagram, with a lot of fuzz (intersymbol interference) at the top and bottom of the eye. In this case, if you add jitter to one particular bit cell boundary, and assuming that action displaces horizontally the entire form of that particular rising or falling edge, then the vertical displacement of the waveform at the sampling location equals, to first order, the slope (dV/dt) of the received waveform at the time of sampling multiplied by the amplitude of the jitter (dt).
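
A one-line calculation captures this first-order conversion; the numbers below are invented purely for illustration:

    # Jitter-to-noise conversion at the sampler: vertical noise equals
    # the local slope times the timing displacement (illustrative values).
    slope = 2.0e9       # dV/dt of the received edge at the sampler, V/s
    dt = 10e-12         # jitter displacement of that edge, s
    dv = slope * dt     # equivalent vertical noise: 20 mV
    print(f"{dv * 1e3:.0f} mV from {dt * 1e12:.0f} ps of jitter")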

Because the maximum slopes in the eye pattern waveform occur off-center, near the edges of the eye pattern, those locations harbor the strongest coupling between jitter and SNR. Examining the waveform at the edge of the eye pattern, corresponding to the maximum degree of misalignment you expect in your recovered clock, it is tempting to power-sum the noise due to ISI, jitter, and other sources to arrive at some overall SNR figure that represents the true operational margin of your system. I should caution you that this approach is rarely fruitful. One reason power summing does not work is that the slope of the received waveform is not constant. Therefore, one cannot extrapolate from the small amounts of jitter that commonly occur to the probability of rare, large noise events unless you can specify the complete joint probability of occurrence of all the factors involved, along with the shape of the received waveform.
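
Here is a small numerical illustration of that difficulty, using a raised-cosine edge shape and a sampling point of my own choosing (none of this comes from any particular link):

    import numpy as np

    rng = np.random.default_rng(1)
    T = 1.0

    def edge(t):  # raised-cosine rising edge spanning one bit interval
        return 0.5 * (1 - np.cos(np.pi * np.clip(t, 0, T) / T))

    t0 = 0.9 * T                        # sample near the top of the edge
    for sigma in (0.01 * T, 0.10 * T):  # small vs. 10x larger RMS jitter
        dv = edge(t0 - rng.normal(0, sigma, 100_000)) - edge(t0)
        rms = np.sqrt(np.mean(dv ** 2))
        print(f"sigma = {sigma:.2f}T   rms vertical noise = {rms:.4f}")
    # The vertical noise grows by more than 10x and develops a one-sided
    # tail, because large excursions reach the steep center of the edge
    # while excursions past the top of the edge are clipped. A Gaussian
    # extrapolation from the small-jitter case misses this behavior.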

I don't know about you, but I would rather not contemplate the joint probability of occurrence of vertical noise and horizontal jitter in the same equation. The usual approach to estimating system performance separates these two terms, ensuring that each term individually remains sufficiently small to guarantee reliable operation when the two are added together. The result is a "conservative" estimate of performance (but it isn't conservative by very much, so I think it is the best way to go). The procedure works like this (a numeric sketch of the arithmetic follows the list):

  1. Generate an engineering budget that establishes some limitation on the worst-case peak value of expected jitter, taken over an interval of some large number of bits (perhaps ten-to-the-fifteenth).
  2. Estimate how far your recovered clock is likely to wander, based on the various offset voltages present in the phase detector, etc.
  3. Add the worst-case peak jitter and the recovered clock offset together. This number tells you how far the "apparent" clock might stray from the ideal sampling location on any individual data cell. Call this number the worst-case clock offset.
  4. Based on the simulated shape of your received waveform, determine the degradation in received signal amplitude at the worst-case clock offset.
  5. Subtract from your degraded signal amplitude the amplitude of all other noise sources (on a ten-to-the-fifteenth BER basis) and demonstrate that the resulting signal still exceeds the minimum receiver sensitivity.
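
To make the procedure concrete, here is a toy numeric walk-through; every figure is invented for illustration and is not a recommendation for any particular link:

    ui = 200e-12              # unit (bit) interval, s
    wc_jitter = 0.25 * ui     # step 1: worst-case peak jitter, ~1e15 bits
    clk_offset = 0.10 * ui    # step 2: recovered-clock wander
    wc_clock_offset = wc_jitter + clk_offset           # step 3: 70 ps

    ideal_amp = 800e-3        # ideal eye amplitude at the center, V
    amp_loss = 0.35           # step 4: fractional amplitude loss at the
    degraded = ideal_amp * (1 - amp_loss)              # worst-case offset

    other_noise = 250e-3      # step 5: all other noise, 1e-15 BER basis, V
    sensitivity = 150e-3      # minimum receiver sensitivity, V
    margin = degraded - other_noise - sensitivity      # 120 mV: passes
    print(f"margin = {margin * 1e3:.0f} mV")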

This basic procedure has been used in the development of many popular LAN standards. It has stood up to scrutiny by literally hundreds of engineers, and I believe it will serve you quite well.

As a final note regarding the estimation of worst-case jitter, I should like to point you to this article:

"Random and Deterministic Jitter," EDN Magazine, June 27, 2002

Best Regards,
Dr. Howard Johnson