## DC Blocking Capacitor Value

This is not the last newsletter for 2004, despite the fact that, and I consider this only a technicality, my wife's calendar currently says 2005. Mine still says 2004.

Yes, I'm a little behind schedule, but I'll be catching up soon. In fact, you may be interested to know that the subsequent issue #10 of this series has Already Been Published, even though you may not have received it (unless you work for HP).

What's happening is simple to understand, but a little difficult to explain.

Ten years ago, when I first began teaching seminars, two of my earliest clients were Sandia National Laboratories and Hewlett Packard. Both have since continued a regular program of educational interactions. This year, commemorating my tenth anniversary of teaching, I directed some newsletters to my colleagues at those companies, addressing issues of particular concern to their designs. The titles are "Serial Killers" (vol. 7, #7) and "Scrambled Bus" (vol. 7, #10). The article "Serial Killers" discusses the testing of high-reliability serial links. "Scrambled Bus" details a couple of interesting ways to reduce EMI on exposed ribbon-cable interfaces. Both articles now appear in my article index should they interest you.

Today's missive, #9, is part II of the previous publicly circulated letter, #8, dealing with DC blocking capacitors on serial links.

Ok! Now I'm ready to turn the page on 2004 and begin preparing for DesignCon in Santa Clara, where you will find me in the Xilinx booth on January 31st talking about my latest movie "Introduction to RocketIO", and any other subjects you care to bring up. Stop by and say hello.

2005 promises to be another event-filled year. Check out the end of this newsletter for a listing of my public appearances.

## DC Blocking Capacitor Value

Jaime Melanson, of Dell Enterprise Server Design writes:

How do I choose the value for a DC blocking capacitor in a serial link application?

Jaime, you raise a very important question. Especially now that many organizations are adopting smaller and smaller capacitor body sizes, it is not always possible to use something big enough (like 0.1 uF) to completely wipe out the DC wander problem.

I will try to provide a quantitative answer to your question.

### Background

I shall begin with a quote from my newsletter vol 4, #15, "When to use AC coupling":

To estimate the degree of DC wander possible when passing a particular code through a certain high-pass filter HPF(f), first set up a complementary filter LPF(f), defined thus:

LPF(f) = 1 - HPF(f)

Then pass the data code through the filter LPF(f) and look for the worst-case output. The magnitude of the output of LPF(f) equals the magnitude of the worst-case DC-wander error you will experience when passing your signal through HPF(f).

If that article is not familiar to you, take a moment now to look it over, as the remainder of this text builds on that theme: ../news/4_15.htm
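For readers who like to experiment, here is a small sketch of that idea (my own illustration, not taken from the referenced article): build a one-pole LPF, form the complementary HPF by subtraction, and confirm that the wander error of the AC-coupled path equals the LPF output.

```python
# Sketch (assumed discrete-time model): a one-pole low-pass filter and its
# complementary high-pass filter HPF = 1 - LPF.  The LPF output is exactly
# the DC-wander error introduced by AC coupling the signal through the HPF.

def lpf(x, tau, dt=1.0):
    """One-pole low-pass filter with time constant tau (same units as dt)."""
    y, out = 0.0, []
    a = dt / tau
    for s in x:
        y += a * (s - y)
        out.append(y)
    return out

# A long run of +1 bits: the worst case for DC balance.
x = [1.0] * 200
low = lpf(x, tau=1000.0)
high = [xi - li for xi, li in zip(x, low)]    # complementary HPF output

# Wander error of the AC-coupled (HPF) path = input minus HPF output.
err = [xi - hi for xi, hi in zip(x, high)]
print(max(abs(e - l) for e, l in zip(err, low)) < 1e-12)   # → True
```

The longer the run of identical bits, the larger the LPF output grows, which is precisely the DC wander the rest of this article quantifies.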

### Filter Theory (Review)

The remainder of this article requires that you know the relaxation time constant associated with a high-pass R-C filter.

If your transmission setup is terminated at both ends with impedance Z0, as is customary with very high-speed links, then the total resistance in series with your DC blocking capacitor equals twice the line impedance, or 2·Z0. A DC-blocking capacitor C placed in series with your serial link creates a simple high-pass filter [HPF] with a time constant:

τ = 2·Z0·C

That statement assumes your line is terminated at both ends.

If, on the other hand, your source happens to be a low-impedance driver and the line is terminated only at its far end with impedance Z0, then the time constant becomes a different value:

τ = Z0·C

In either case the related complementary filter [1-HPF(f)] has the same time constant as HPF(f).

I shall assume in the following analysis that your transmission line is terminated at both ends. If that is not the case, you must modify all the equations below.

### Approach

I propose that we develop an expression for the maximal size of the output from the complementary low-pass filter LPF(f). That expression relates the maximum amount of DC wander to the time constant τ, and thus to the value of capacitance.

If you know how much wander your system can tolerate, as determined from analysis of your eye margin budget, you can then calculate the capacitance required to achieve that goal.

To read along with the following analysis you need to know how a one-pole LPF reacts to one individual bit.

A single bit of duration T, when presented to the input of the LPF, causes the LPF output to rise in a linear fashion during the bit interval, then fall slowly back toward zero with time constant τ. This approximation assumes that τ vastly exceeds T, a condition consistent with the idea of a DC-blocking application.

If you transmit N similar bits in a row, it is a good bet that the LPF filter output will pump up to a value of N·A·T/τ (for bits of amplitude A).
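Here is a quick numerical check of that estimate (a sketch of mine with assumed link numbers, not part of the original analysis): drive a slow one-pole LPF with N identical bits and compare the peak output against N·A·T/τ.

```python
# Numerical check (assumed single-pole LPF model): N identical bits of
# amplitude A and width T pump a slow low-pass filter up to roughly
# N*A*T/tau when tau vastly exceeds N*T.

def lpf_run(amplitude, duration, tau, dt=0.01, y0=0.0):
    """Drive a one-pole LPF with a constant input for `duration` seconds."""
    y = y0
    for _ in range(int(duration / dt)):
        y += (dt / tau) * (amplitude - y)
    return y

# Assumed example: 5 one-volt bits of unit width, tau = 1000 bit times.
N, A, T, tau = 5, 1.0, 1.0, 1000.0
peak = lpf_run(A, N * T, tau)
print(peak, N * A * T / tau)      # peak lands within 1% of N*A*T/tau
```

The agreement improves as τ/NT grows, which is exactly the regime a DC-blocking capacitor operates in.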

How many bits in a row might you ever see? That is a very important question to ask about your data code; answers vary widely depending on who designed your code and whether they considered DC balance.

I am going to define a term now, called running-disparity, or RD, that will help you understand how data codes are built. Every sequence of code bits x[n] implies a corresponding sequence RD[n], where:

RD[n] = x[1] + x[2] + ... + x[n]  (the sum of all bits up to and including x[n])

It is helpful in constructing these arguments if you think of a binary data sequence as having values +1 and -1 (or, more generally, +A and -A). For a DC-balanced sequence, the RD never strays far from zero.

In fact, one excellent way to specify the degree of DC balance in a data code is to call out the maximum excursion of RD.

Those of you well versed in calculus may be thinking that RD looks like an integrating operation. Precisely.
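In code, the RD sequence is just a cumulative sum. A minimal sketch of my own, assuming the ±1 signaling convention described above:

```python
# Running-disparity calculator (assumed +/-1 signaling, as in the text):
# RD[n] is the cumulative sum of all bits up to and including x[n].

from itertools import accumulate

def running_disparity(bits):
    """bits: iterable of 0/1 code bits; returns the RD sequence at +/-1 levels."""
    return list(accumulate(1 if b else -1 for b in bits))

# A DC-balanced pattern keeps RD near zero; an unbalanced one lets it drift.
print(running_disparity([0, 1, 0, 1, 0, 1]))                  # → [-1, 0, -1, 0, -1, 0]
print(max(abs(r) for r in running_disparity([1, 1, 1, 1])))   # → 4
```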

Here is a basic theorem about RD.

IF your data code guarantees |RD|<n

THEN the DC wander signal *z*(*t*) is bounded by:

|z(t)| ≤ 2·n·A·T/τ

**Where:**

- *n* is the bound on RD, in numbers of baud intervals,
- *A* is the binary signal amplitude (±*A*),
- *T* is the coded bit interval, and
- τ is the HPF filter time constant.

This RD theorem assumes that the filter response is a single-pole filter with a monotonic step response (no zeros).

With the RD theorem and the relation τ = 2·Z0·C in hand, a specification *M* for the maximum permissible amplitude of DC wander then determines the required (minimum) value of capacitance *C*:

C ≥ n·A·T / (Z0·M)

**Where:**

- *n* is the bound on RD, in numbers of baud intervals,
- *A* is the binary signal amplitude, in volts, assuming the signal swings from -*A* to +*A*,
- *T* is the coded bit interval,
- *M* is the maximum permissible amplitude of DC wander, in volts, and
- Z0 is the transmission line impedance (assumes both-ends termination).

Any value of capacitance larger than this amount will work.
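As a worked example, here is that calculation in code, using the bound C ≥ n·A·T/(Z0·M), which follows from combining the wander limit 2·n·A·T/τ with τ = 2·Z0·C. The link numbers below are assumptions of mine, chosen only for illustration:

```python
# Worked example (hypothetical link numbers): minimum DC-blocking
# capacitance from C >= n*A*T / (Z0*M), which follows from the wander
# bound 2*n*A*T/tau together with tau = 2*Z0*C (both-ends termination).

def min_blocking_cap(n, amplitude, bit_time, z0, max_wander):
    """Return the smallest capacitance (farads) meeting the wander budget."""
    return n * amplitude * bit_time / (z0 * max_wander)

# Assumed example: 8B10B code (|RD| < 3), 400 mV swing, 2.5 Gb/s
# (T = 400 ps), 50-ohm line, 20 mV wander budget.
C = min_blocking_cap(n=3, amplitude=0.4, bit_time=400e-12, z0=50.0,
                     max_wander=0.020)
print(f"{C * 1e9:.2f} nF")   # → 0.48 nF
```

Note how quickly the requirement shrinks with a well-balanced code: cutting *n* in half cuts the required capacitance in half.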

The following sections review four popular
data codes showing the values of *n* appropriate for each.

### Manchester coding

Manchester codes map each data bit into a two-bit code word sequence. The code words are then transmitted at a rate twice as fast as the data bits.

There are several forms of Manchester coding. In the most basic type a data one maps to the pattern 01 and a data zero maps to the pattern 10. Note that at least one transition is guaranteed in the CENTER of each two-bit pattern. Other flavors exist, including a variety that transmits a guaranteed transition at the BEGINNING of each bit, and then either another transition or no transition in the middle position. The DC balance considerations for all these flavors work the same way.

Figure 2 illustrates the RD behavior of a Manchester code. Given a transmitted (code-word) data pattern of 010101101010, calculate the sequence of RD values starting with RD=0. Each "0" causes RD to go down by one, and each "1" raises it. If you think of the RD sequence as an integrated version of the transmitted data, the sequence of RD values looks like this.
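You can reproduce that RD trace with a few lines of code (my own sketch, assuming the ±1 convention used above):

```python
# RD trace for the Manchester code-word pattern discussed with Figure 2:
# each "1" raises RD by one, each "0" lowers it (assumed +/-1 levels).

def rd_trace(pattern):
    """pattern: string of '0'/'1' code bits; returns the RD sequence."""
    rd, trace = 0, []
    for bit in pattern:
        rd += 1 if bit == "1" else -1
        trace.append(rd)
    return trace

print(rd_trace("010101101010"))
# → [-1, 0, -1, 0, -1, 0, 1, 0, 1, 0, 1, 0]
```

For this pure data pattern RD never strays beyond ±1, illustrating the excellent DC balance of Manchester data words.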

The Manchester code has a few complications involving the use of Control Codes (like start-of-packet, end-of-packet, etc.). These codes often appear recognizably different from any possible data pattern, in which case the Control Codes likely INCREASE the maximum value of RD. For example, any "code-violation" scheme that omits transitions where they are otherwise guaranteed produces (somewhere) RD=±2.

You are responsible for checking all your control codes to locate worst-case RD excursions.

It is possible using variable bit-density coding or layered coding schemes to architect control code systems that do not increase the worst-case RD, but that discussion lies beyond the scope of today's article.

Accepting for the moment a simple bound of |RD|<2, our RD theorem establishes a limit on the amplitude of the DC wander signal z(t):

|z(t)| ≤ 4·A·T/τ

Because the RD has additional limits placed on its behavior, you can in some code instances derive a slightly tighter bound on |z|.

This improvement results from the fact that the RD waveform produced by code violations does not remain long at its peak excursions of ±2, returning always in the next bit toward zero. This restriction prevents the occurrence of patterns that cause maximal excursions, resulting in a limit slightly below 4·A·T/τ.

### 8B10B coding

The 8B10B code, popularized by ANSI Fibre Channel and also Gigabit Ethernet 1000BASE-CX (dual shielded twisted pairs), XAUI, and a host of other serial links, enforces a strict limit of |RD|<3. This limit applies to all data and control words, so we may confidently conclude:

|z(t)| ≤ 6·A·T/τ

As with Manchester coding, a minor improvement in this bound results from considering how the RD waveform is prohibited from remaining long at its peak excursions of ±3, returning always in the next bit toward zero. This restriction prevents the occurrence of patterns that cause maximal excursions, resulting in a slightly smaller limit of:

|z(t)| ≤ 4.9·A·T/τ

If you want to see how I derived the factor 4.9, here are my notes: 7_09_addenda.pdf

### 4B5B coding

The 4B5B code as used in FDDI and 100BaseFx (Fiber Optic Fast Ethernet) permits the transmission of an unlimited number of successive 5-bit code words like 01010 that, being inherently unbalanced, cause RD to grow without bounds. That growth complicates the analysis. To see what happens, we must decompose each 5B code word into two pieces:

- A DC offset having amplitude of either +/-0.2 persisting for the entire 5B interval,
- Plus the residual signal after subtracting the DC offset

To simulate an entire data stream, pass the sequence of DC offset signals through a first LPF and separately pass the remaining portions of the signal through a second LPF. Sum the two LPF outputs to determine the complete value of z(t).

If you run this simulation you will find that the output of the first LPF can never exceed 0.2 (assuming the LPF is well damped).

The output of the second LPF has now been reduced to a case having no DC input (i.e., RD=0 at the end of each 5B word), much like the previous cases, but with a modification to the bit amplitudes used to determine the RD. For example, in the case of the code word 0010, the 4B5B table lists a value of 10100, for which the transmission coder interprets each "1" as a change of sign (leftmost bit transmitted first). The output stream thus appears as "+1+1-1-1-1" or "-1-1+1+1+1", depending on the beginning state of the transmitter. Subtracting the DC offsets of -0.2 or +0.2, respectively, produces streams of DC-balanced code words like this:

+1.2, +1.2, -0.8, -0.8, -0.8

These values become the input to the second LPF in your analysis. The RD associated with such a bit stream proceeds as follows:

+1.2, +2.4, +1.6, +0.8, 0

indicating a peak RD of 2.4. If you check all the data-code cases you will find that |RD|<2.4 always, so the output of the second filter will be limited to a value no worse than ±4.8·A·T/τ.
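The arithmetic above is easy to reproduce in code (a sketch of my own, assuming the sign-change coder described in the text, starting from the -1 state):

```python
# Walking through the 4B5B decomposition above: code each "1" as a change
# of sign (leftmost bit first), remove the word's DC offset, then take the
# running disparity of the balanced residue.  Exact rational arithmetic
# keeps the printed values clean.

from fractions import Fraction
from itertools import accumulate

def nrzi(codeword, start=-1):
    """Sign-change coder: each '1' flips the line level, '0' holds it."""
    level, out = start, []
    for bit in codeword:
        if bit == "1":
            level = -level
        out.append(level)
    return out

stream = nrzi("10100")                        # [1, 1, -1, -1, -1]
offset = Fraction(sum(stream), len(stream))   # DC offset of the 5B word: -1/5
balanced = [s - offset for s in stream]       # 6/5, 6/5, -4/5, -4/5, -4/5
rd = [float(v) for v in accumulate(balanced)]
print(rd)                                     # → [1.2, 2.4, 1.6, 0.8, 0.0]
```

The trace peaks at 2.4 and returns to zero at the end of the word, exactly as claimed above.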

Unfortunately, that is not the end of it, because there is a control pattern, called JK, which violates the RD rules used for data patterns. This pattern, when coded on the line, looks like:

...1000011110...

Looking at the complete ten-bit pattern as an entity with net zero DC bias, the worst-case excursion in RD is -3 (or +3, depending on the polarity with which you begin). With the JK control code taken into account, the output of the second filter will be limited to a value no worse than ±6·A·T/τ.

Adding to that the worst-case output of the first filter (±0.2·A) yields a bound on the total DC wander of:

|z(t)| ≤ 6·A·T/τ + 0.2·A

### MLT-3 coding

This code was implemented in the category-5 UTP version of FDDI and also the UTP version of Fast Ethernet. The Ethernet version was officially titled "100BASE-TX", but is sometimes called just "100BASE-T".

This system uses the 4B5B code table, but instead of alternating the output bits on every "1" in the code table it uses a different plan called MLT-3.

The MLT-3 plan is a ternary signaling scheme, meaning that at each signaling instant, or baud, the signal assumes one of three levels. A change from one level to the next marks a logical 1. Intervals where the signal remains constant represent logical 0s. The transmitted signal circulates endlessly among the three possible signal values, always in the same order: [+1, 0, -1, 0, +1, 0,...]. This system uses three levels, but at each stage it can only do one of two things, either advance to the next level or stay where it is (see Figure 3). Each sample clock, therefore, conveys exactly one bit of information.
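A minimal encoder sketch may make the MLT-3 rule concrete (my own illustration; the starting level and cycle indexing are arbitrary assumptions):

```python
# MLT-3 encoder sketch (assumed behavior per the description above):
# a "1" advances the output through the cycle [+1, 0, -1, 0]; a "0"
# holds the output at its present level.

def mlt3_encode(bits, cycle=(1, 0, -1, 0)):
    """bits: iterable of 0/1; returns the ternary line levels."""
    idx, out = 1, []              # start at level 0 (index 1 of the cycle)
    for b in bits:
        if b:
            idx = (idx + 1) % len(cycle)
        out.append(cycle[idx])
    return out

print(mlt3_encode([1, 1, 1, 1, 0, 1, 1, 0]))
# → [-1, 0, 1, 0, 0, -1, 0, 0]
```

Notice that a long run of zeros simply parks the line at a constant level: this is precisely the behavior that makes the un-balanced scrambled patterns discussed below so dangerous.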

The MLT-3 standard then goes one step further by introducing scrambling. By scrambling the 5B code-word sequence with a long pseudo-random pattern, and unscrambling at the far end with a synchronized pseudo-random pattern, spectral peaks that otherwise occur in highly repetitive data sequences become smoothed into a continuum of white noise. The EMC benefits of scrambling can be considerable (see my newsletter vol 7, #10, "Scrambled Bus").

What this system loses by scrambling the binary data right *after* the 4B5B coding (instead of scrambling the data prior to the 4B5B code table) is DC balance. There are no guarantees that the resulting MLT-3 sequence will not progress to the +1 state and hang there for a very, very long time. Not only is DC balance lost, but also the minimum transition density required to maintain PLL lock.

The architects of this code argued that the probability of transmitting data patterns well enough matched to the scrambler to produce DC balance errors (or transition density problems) was completely negligible, and backed up their assertions with statistical calculations. Under the assumption that the data was random, their analysis held.

Unfortunately, and this is one of those gems of wisdom I can pass along to you after a long career of advanced systems development, an ingenious group of engineers at Hewlett Packard, who just happened to be promoting a different, competing standard, calculated some data patterns that were virtually guaranteed to produce MLT-3 patterns that occasionally locked with the pseudo-random generator in such a way that the scrambled output would produce long strings of binary zeros. The MLT-3 circuit then coded these zeros into long runs of steady-state values. The result created obscene gyrations of DC wander with striking regularity. Someone (we may never know who) then placed these test patterns onto disks and mailed them to prospective customers of the Fast Ethernet 100BASE-TX system, suggesting that they attempt to transfer the innocent-looking files whose transmission, of course, crashed the network.

Scrambling prior to the code-table lookup, and designing the code table with MLT-3 in mind, might have prevented these difficulties.

As of today, that problem has been largely solved. To obviate concerns about DC wander with MLT-3, many manufacturers have implemented DC restoration circuits in their receivers (see "SONET Data Coding", Newsletter vol 5, #5 ../news/5_5.htm). These circuits fix the system's DC-wander problems.

If there is a moral to my tale, it is this: Customers have no incentive to stress your system, but competitors do.

Related to this story, I heard just yesterday on the si-list a discussion of power integrity for advanced processor designs. It was asserted that practical software would likely never exercise some peculiar "worst case" combinations of features inside a CPU, and so failure of the CPU power delivery system under those conditions would not be a problem.

The failure will indeed not be a problem for the COMPETITORS of that particular product. If they discover the pattern, they will be quite gleeful.

Best Regards,

Dr. Howard Johnson