Erroneous Harmonics

I am writing a paper, and was wondering if you could provide a clarification for me. A digital signal, e.g., Fibre Channel at 1.06 Gbps, has a spectrum with a fundamental frequency Fo = 531 MHz, owing to the 101010... transition sequence. The bit period is 942 psec, with a maximum edge rate on the optical signal of 320 psec according to the 100-SM-LC-L draft spec.

In my paper I claim that "the Tx electronics, and the link itself, must be designed toward the 5th harmonic of Fo", which is roughly 2.6 GHz. This statement is included from a signal-integrity point of view to indicate how bedeviling the laser package parasitics, laser drive circuit "tweakiness", non-lumped impedances, laser expense, and EMC would be.

Is designing to the 3rd harmonic not sufficient? Is the 7th overkill? What amplitude and phase deviations from the ideal transfer function would be considered "good"? Is it a fair statement that the 5th harmonic is required from a design (and laser expense) point of view?

The reason for my question is that I am trying to define the amount of "bandwidth" used by different modulation schemes. We are developing a data-coding technique that has some compression in the frequency domain, and whose spectral content is bounded at the high end by 1.5 to 2 Fo. I want to make a fair comparison.

I would also like to use a citation regarding the 5th (or 3rd) harmonic claim. I think your book with Dr. Graham would be a good citation, but I want to double-check with you to make sure that I am referencing it correctly.

Thanks,
Bob

Thanks for your interest in High-Speed Digital Design.

You won't find a quote in my book about "harmonics" because that isn't a good way to look at the problem. From a theoretical point of view, only strictly repetitive signals have harmonics. Random digital signals aren't necessarily repetitive, so they have *NO HARMONICS*.

What you need is an idea of the power spectral density of a random digital signal, and for that you need to directly address the question of the rise/fall time.

At the serial electrical interface between a gigabit serializer and an optical converter, the rise/fall time is on the order of 20 percent of the bit interval, or about tr = 200 ps.
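As a quick back-of-the-envelope sketch (in Python; the only assumption here is the 0.5/tr "knee frequency" rule of thumb, and the variable names are illustrative, not from any standard):

```python
# Back-of-the-envelope numbers for the 1.0625 Gb/s serial interface discussed above.
# Assumes the 0.5/tr "knee frequency" rule of thumb.

bit_rate = 1.0625e9                # bits per second
bit_interval = 1.0 / bit_rate      # ~941 ps

t_rise = 200e-12                   # ~20 percent of the bit interval, rounded
f_knee = 0.5 / t_rise              # ~2.5 GHz

print(f"bit interval   = {bit_interval * 1e12:.0f} ps")
print(f"rise/fall time = {t_rise * 1e12:.0f} ps")
print(f"knee frequency = {f_knee / 1e9:.1f} GHz")
```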

Let's consider how the frequency response of your interconnection could distort such a signal.

  • A 1-dB disturbance in the frequency response of your interconnect at the maximal alternation frequency of the data (531 MHz) would induce a 1-dB effect on the amplitude at the center of the received eye pattern.
  • A 1-dB disturbance in the frequency response of your interconnect at the knee frequency (0.5/tr = 2.5 GHz) would induce a 1-dB effect on the shape of the rising edge.
  • A 1-dB disturbance in the frequency response of your interconnect at twice the knee frequency (1/tr = 5.0 GHz) would have about a 0.1-dB effect on the shape of the rising edge (the spectral power density of a digital signal falls off very rapidly above the knee frequency).

These effects are easily simulated.
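For example, here is a minimal sketch of such a simulation (in Python with NumPy; the Gaussian edge model, sample rate, and record length are my own illustrative choices, not taken from any standard). It builds a random NRZ stream with a roughly 200-ps rise time and reports its relative spectral density at the alternation frequency, the knee frequency, and twice the knee frequency:

```python
# Sketch: spectral density of a random NRZ stream with finite rise time.
# Assumptions (illustrative only): 1.0625 Gb/s line rate, ~200 ps 10-90%
# rise time approximated with a Gaussian edge filter.
import numpy as np

rng = np.random.default_rng(0)

bit_rate = 1.0625e9                 # bits per second
samples_per_bit = 32
fs = bit_rate * samples_per_bit     # sample rate, Hz
n_bits = 20000

# Ideal (zero-rise-time) random NRZ waveform with +/-1 levels.
bits = rng.integers(0, 2, n_bits) * 2 - 1
ideal = np.repeat(bits, samples_per_bit).astype(float)

# Gaussian edge filter: sigma chosen so the 10-90% rise time is ~200 ps
# (10-90% rise of a Gaussian step response is about 2.563*sigma).
t_rise = 200e-12
sigma = t_rise / 2.563
t = np.arange(-4 * sigma, 4 * sigma, 1 / fs)
h = np.exp(-t**2 / (2 * sigma**2))
h /= h.sum()
shaped = np.convolve(ideal, h, mode="same")

# Relative power spectral density via a simple windowed periodogram.
spectrum = np.fft.rfft(shaped * np.hanning(len(shaped)))
freqs = np.fft.rfftfreq(len(shaped), 1 / fs)
psd_db = 10 * np.log10(np.abs(spectrum)**2 + 1e-30)
psd_db -= psd_db.max()

# Compare the level near Fo = 531 MHz, the knee (0.5/tr = 2.5 GHz),
# and twice the knee (1/tr = 5 GHz).
for f_query in (531e6, 2.5e9, 5.0e9):
    i = np.argmin(np.abs(freqs - f_query))
    print(f"{f_query / 1e9:4.2f} GHz : {psd_db[i]:6.1f} dB (relative)")
```

The point of the exercise is that there is plenty of signal power at 531 MHz, much less at the knee frequency, and very little an octave above it, which is why disturbances well above the knee frequency hardly matter.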

Best regards,
Dr. Howard Johnson

I see my numbers are roughly right, but for the wrong reason :-) ...

...You quote edge rates at 20 percent of the bit interval, but is there a reference for this? I looked in the ANSI 10-bit serializer interface standard to see if it lists the edge rate requirements for the PECL serializer output. It doesn't. The GBIC standard and the Vitesse data sheets don't spec this either; HP's Fibre Channel SerDes gives 375 psec maximum. Unless I can find a good source for this, I might have to quote you, with permission.

The GE optical standard specifies 320 psec edge rates (measured at the 20-80% points), compared to 800 psec for the bit time. This works out to be a 40 percent ratio, and Fibre Channel is roughly 50 percent. This is the optical edge rate, and it includes the finite response time of the E/O converter. However, it is the edge rate exiting the serializer that is the "tall pole in the tent" as far as system design difficulty is concerned. Please confirm.

-Bob

My recollection from the development of the Gigabit Ethernet standard is that we heard specifications for the rise time on the serializer-to-optics link ranging from 150 ps to 400 ps. There was a general consensus that 400 ps was too slow, and that such a slow rise/fall time would adversely affect the jitter budget (i.e., there would not be enough eye opening left after accounting for all other jitter effects).

Twenty percent of 800 ps would be 160 ps, which is at the lower end of the range we heard about. I'd say 20 percent is a good *minimum* number representative of Gigabit Ethernet, and representative of other systems I've worked on in the past as well (FDDI & Fast Ethernet at 100 MHz, Ultra Network Technologies at 312 MHz, etc.). For your bandwidth calculations it's the *minimum* risetime, not the maximum, that matters most.
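To see why, here is a minimal sketch (again assuming only the 0.5/tr knee-frequency rule of thumb) relating the rise times quoted above to the bandwidth they imply; the fastest (minimum) edge sets the highest knee frequency:

```python
# Knee frequency implied by each rise time mentioned above (0.5/tr rule of thumb).
for t_rise_ps in (150, 160, 200, 320, 400):
    f_knee_ghz = 0.5 / (t_rise_ps * 1e-12) / 1e9
    print(f"tr = {t_rise_ps:3d} ps  ->  knee frequency ~ {f_knee_ghz:.1f} GHz")
```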

By the way, the reason people don't use edge rates much faster than 20 percent of the bit time for bleeding-edge, high-speed serial links is that if your signals looked that good, your marketing people would make you turn up the clock until the edges *did* consume at least 20 percent of the bit time.

The reason people don't go with edges much slower than 40 percent of the bit time is that at that point the slow edges begin taking a huge bite out of the timing and jitter budget.

Twenty to forty percent is the usual range.

Best regards,
Dr. Howard Johnson