High-Speed Digital Design Online Newsletter: Vol. 12 Issue 03
I'll be in Rochester, NY, May 4-7, 2009, teaching two courses: Advanced High-Speed Signal Propagation and High-Speed Noise and Grounding.
These two public courses are sponsored by Avnet. My full schedule of cities, dates and course descriptions appears at: www.sigcon.com.
PREFACE TO TODAY'S ARTICLE
Starting in 1981, my technical mentor, Professor Martin (Marty) Graham, worked with me to create a new distributed bus architecture that quadrupled the performance of a large ROLM (later IBM) digital telephone exchange. Marty revealed the principles behind his new bus structure in a series of meetings, mostly around mealtimes.
If you haven't read my latest article, "Space-Time Diagrams," read it first. It contains background material essential to understanding today's topic.
Dr. Graham flagged down a white-jacketed waiter and called for another scotch rocks. It was late. At our regular spot in the back of Alexander's restaurant, surrounded by heavy velvet curtains, the rich smell of Lobster Newberg wafted in from the kitchen. Marty loved sumptuous food. Half of our discussions took place in restaurants.
When his drink came, Marty leaned across the table. He gazed at me with bushy eyebrows raised. In a soft voice, he asked, "When does a driver not drive?" Marty was not thinking about a broken driver, or one that is powered off or tri-stated. He meant a regular, totem-pole output, in the switched-on condition.
This is not a trick question. It has deep implications for the design of distributed bus structures.
* * * * *
Let's conduct a low-speed bus experiment. Connect two totem-pole drivers to the same bus. Terminate the bus at both ends to prevent reflections. Begin with both drivers in the tri-state (OFF) condition, sourcing no current. At time zero, switch one of the drivers high and leave it high for a long time. Eventually, after the transmitted wave propagates to both ends of the line, the driver sources current into the terminating networks at both ends. The total DC current required of the driver equals the logic-high voltage, V[HI], divided by twenty-five ohms (the parallel combination of the two matched end terminations, 50 ohms each). Presuming the driver is powerful enough to do this job, the voltage at both ends of the line rises to V[HI].
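To put numbers on that calculation, here is a minimal sketch in Python. The 50-ohm line impedance and 3.3-V logic-high level are hypothetical figures of my own choosing, not values from the experiment:

```python
# Sketch: DC current demanded of a single driver on a doubly terminated bus.
# Assumed values (not from the article): 50-ohm line, 3.3-V logic-high level.
Z0 = 50.0        # characteristic impedance of the bus, ohms (assumed)
V_HI = 3.3       # logic-high voltage, volts (assumed)

# The two end terminations, Z0 each, appear in parallel to the DC source,
# producing the twenty-five-ohm load described in the experiment.
R_load = (Z0 * Z0) / (Z0 + Z0)   # = Z0 / 2 = 25 ohms
I_dc = V_HI / R_load             # steady-state current the driver must source

print(f"Parallel termination load: {R_load:.0f} ohms")
print(f"Required DC drive current: {I_dc * 1000:.0f} mA")
```

For these assumed values the driver must source 132 mA continuously, which illustrates why driver strength matters on a doubly terminated bus.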
What happens if, in addition to the first driver, you also switch on a second driver, setting it high as well? Answer: the voltage at either end of the line hardly changes. If the output voltage drooped a bit in the first case, there will be less droop now, but other than that, the voltages at the endpoints remain unchanged.
Apparently, if the bus already exists at a certain voltage state, and you switch on a second driver to that same state, the second driver exerts practically no influence on the bus. The driver doesn't drive. Conversely, if you switch the two drivers to opposing states, the cross-switched configuration draws a disproportionate surge of current through the top half of one totem-pole output, across the bus, and then through the bottom half of the other. That surge of current exacerbates ground bounce, power-supply noise, and crosstalk, and risks blowing out both drivers. It's a bad idea. In either case, engaging two totem-pole drivers at the same time on the same bus provides no useful function.
You can apply that thought to the design of a distributed bus, but first let me present a couple of definitions.
A low-speed bus activates only one driver at a time. After each driver stops transmitting, the bus waits for all signals to "clear the bus" before proceeding with the next driver. Regardless of how fast you clock the bus, or how long you make it, it is still a low-speed bus if the drivers go one at a time.
A distributed bus simultaneously activates more than one driver. To make it work, the timing on a distributed bus must be as intricately planned as a ballet. Several signals may be in transit simultaneously, moving in different directions. If you do it right, all the signals arrive at their respective receiver locations at precisely defined times and in the correct order.
A distributed bus need not waste bus-clearing cycles. It works best in systems where the bus propagation delay greatly exceeds the transaction burst length.
(NOTE: The largest distributed bus Marty and I successfully designed had 16 taps. It supported as many as three transactions traveling on the bus at the same time.)
A space-time diagram brilliantly depicts the events in a distributed system (Figure 1). The horizontal axis represents the spatial extent of the bus. You may imagine the bus itself stretched across the top of the diagram. The vertical dotted line marked "A" represents the physical position of one transceiver connected to the bus.
The vertical axis represents the progress of time, increasing as you move down the diagram. The third dimension (coming out of the paper toward you, the reader) represents the voltage on the bus at any given point in space and time. Before time T0, the bus exists in a steady-state condition with no voltage at any point.
At time T0, driver A pops ON and remains high for two bit times. It pops OFF at T1. That event precipitates waves moving in both directions away from the source. At the right and left sides of the diagram, representing the ends of the bus, the waves are intercepted by perfect terminations that arrest the progress of incoming signals without creating reflections. The diagram shows the correct bit pattern received at the right end of the bus.
In Figure 1, transaction A completely "clears the bus" before anything else happens. That guarantees that residual portions of wave A do not interfere with subsequent transactions.
The longest bus-clearing time occurs when you activate a transceiver at either extreme end of the structure. The time required equals one end-to-end bus delay. If you wait that long after every transaction before beginning the next, the waves never interfere.
That simple worst-case waiting strategy can be quite wasteful, however. If, for example, two successive chevrons (the V-shaped wave patterns each transaction traces on the space-time diagram) are launched from the same position, no additional waiting should be required. Only when two transactions are launched from opposite ends of the structure is a full end-to-end bus delay of waiting required.
In theory, how close can you place two chevrons? It turns out that as long as the chevron wave patterns don't touch, each driver begins and ends its operating state looking into a section of transmission line that, at that moment, appears quiescent. Totem-pole drivers work fine under that condition.
In practical terms that means you can slide each transaction up (advanced in time) until it just touches some portion of the previous transaction. Figure 2 shows two wave patterns that come close, but do not quite touch. Transaction A sends a pattern of two ones, B sends three. At the right end of the line, all three ones appear at the receiver, although in the figure the last "one" drops off the bottom of the diagram.
As long as the space-time pattern from A clears position B before you begin transaction B, the drivers do not interfere. The correct signal appears at all points along the structure, albeit with unusual timing. To fix the timing, you must use some form of source-synchronous timing. Let's not worry about that just yet.
Mathematically, the minimum necessary waiting period varies in proportion to the physical distance between successive transceiver locations. See if you can work that out from Figure 2. The further apart physically you place the two transmitting locations, the longer you must wait for the effects of A to pass position B.
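As a rough sketch of that proportionality, here is a tiny calculation assuming a hypothetical propagation delay of 180 ps/inch, typical of an outer-layer FR-4 trace. The delay figure and tap spacings are my illustrative assumptions, not numbers from the article:

```python
# Sketch: minimum wait before transmitter B may begin after transmitter A
# finishes, so that A's wave fully clears position B first.
# Assumed: propagation delay of ~180 ps/inch (typical FR-4, my number).

def min_wait_ps(distance_in, delay_ps_per_in=180.0):
    """Waiting period proportional to the physical A-to-B separation."""
    return distance_in * delay_ps_per_in

# Adjacent taps 2 inches apart vs. taps at opposite ends of a 30-inch bus:
print(min_wait_ps(2.0))    # short hop: a brief wait suffices
print(min_wait_ps(30.0))   # end-to-end: the full bus delay, the worst case
```

The short hop costs 360 ps of waiting; the end-to-end hop costs 5,400 ps, fifteen times more, which is the proportionality Figure 2 illustrates.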
The "minimum waiting period" principle suggests that a distributed time-domain-multiplexed structure can be most efficiently switched by sweeping the pattern of transmitting locations back and forth from one end of the bus to the other and back again. A sweeping strategy minimizes the physical space between successive transmit locations, and thus the amount of time wasted.
Does the sweeping idea sound familiar? The same principles govern track-seek scheduling algorithms used in highly optimized disk drives.
Bouncing back and forth between transmitters located at opposite ends of the structure is the least efficient strategy, because it requires a maximum waiting period after each transmission.
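A quick back-of-the-envelope comparison shows why sweeping beats bouncing. The tap positions and the 180 ps/inch propagation delay below are hypothetical assumptions of mine, chosen only to illustrate the accounting:

```python
# Sketch: total waiting time for two transmit orderings on the same bus.
# Each wait is proportional to the hop distance between successive taps.
# Assumed: taps on a 30-inch bus, ~180 ps/inch delay (my numbers).

def total_wait(order, delay_ps_per_in=180.0):
    """Sum of waits, one per hop between successive transmit locations."""
    return sum(abs(b - a) * delay_ps_per_in for a, b in zip(order, order[1:]))

sweep = [0, 10, 20, 30, 20, 10, 0]      # sweep down the bus and back again
ping_pong = [0, 30, 0, 30, 0, 30, 0]    # bounce between opposite ends

print(total_wait(sweep))      # six 10-inch hops: 10800.0 ps
print(total_wait(ping_pong))  # six 30-inch hops: 32400.0 ps
```

For the same number of transactions, the ping-pong ordering wastes three times as much waiting time as the sweep, just as the track-seek analogy suggests.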
If you do not enforce a sufficient waiting interval between successive transactions, their wave patterns interact. Figure 3 graphically overlaps the two previous drawings, crashing the transactions together. The red zone indicates the area of overlap. In this example transmitter A sends two ones; transmitter B sends three. The diagram shows, at the right end of the bus, the pattern of data bits I wish to receive.
Unfortunately, within the overlap zone the system does not obey the law of wave superposition. True superposition (adding) of space-time waveforms requires a linear, time-invariant transmission medium. The transmission line formed by a pcb trace satisfies those conditions, but the drivers do not. Each driver is a time-varying, non-linear device. When the propagating signal from A reaches driver B, something peculiar happens.
From the perspective of driver B, at the moment it switches ON, the left-moving signal from A already holds the line high. Your experience with my low-speed example says that when a driver switches ON to a state that matches the voltage already present at its load, that driver exerts practically no influence on the load. In this case the line is already held high by A, so connecting it through a second totem-pole switch at B to the same logic-high voltage makes little difference. Within the red region, B drives no additional current onto the line. Within the red region, the system behaves as if driver B were not activated.
Figure 4 shows the result. At time T2 driver B activates, but it drives no current into the transmission line. As a result, no wave from B propagates to the right until time T3, at which point the signal from A has fully passed. Driver B then begins working, keeping the line high from T3 until the appointed end of its transmission time. Only after time T3 does current flow from B, propagating waves both right and left. Regardless of how early you begin the transaction at B, the right-hand side of the diagram receives only that portion of the signal from B starting at time T3. The right-hand waveform arrives absent that portion of B's intended signal transmitted between T2 and T3, in this case turning an intended "1" into a "0".
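Here is a minimal timeline sketch of that behavior. It is not a field simulation, just bookkeeping of when A's backward-moving wave holds the line high at position B. The timing numbers and the 180 ps/inch delay are hypothetical assumptions of mine:

```python
# Sketch: when does driver B's output actually begin? (All numbers assumed.)

def nibbled_start(t0_a, t1_a, dist_ab_in, t2_b, delay_ps_per_in=180.0):
    """Return the time at which driver B's signal actually starts moving.

    A drives high from t0_a to t1_a. A's left-moving wave holds the line
    high at B's position from (t0_a + flight) until t3 = (t1_a + flight),
    where flight is the A-to-B propagation time. While the line at B is
    already held high, B's totem-pole drives no new current, so B's
    effective start is delayed to t3: the front of its signal is nibbled.
    """
    flight = dist_ab_in * delay_ps_per_in   # A-to-B flight time, ps
    arrives = t0_a + flight                 # A's leading edge reaches B
    t3 = t1_a + flight                      # A's wave clears position B
    if arrives <= t2_b < t3:                # B switches on into a held-high line
        return t3                           # nibble effect: delayed start
    return t2_b                             # no overlap: B starts on time

# A drives from 0 to 2000 ps; B sits 10 inches away and switches on at 2500 ps:
print(nibbled_start(0.0, 2000.0, 10.0, 2500.0))   # B's wave starts at 3800.0 ps
```

Everything B intended to send between 2500 ps and 3800 ps is lost, which is exactly the foreshortening Figure 4 depicts.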
Upon making his last point, Marty stopped. He picked up a bread stick. "In effect, the backward-moving wave from A interferes with the transmission from B, foreshortening its outgoing wave." Chomping off the end of the stick, he continued, "I call it the nibble effect."
* * * * *
If you don't yet fully grasp the importance of the nibble effect you are in good company. Nobody gets it at first. This is complicated stuff. I didn't really become a believer until I set up some experiments and saw it myself.
The main conclusion that I want you to remember, however, is pretty simple: between transactions you must enforce a waiting period proportional to the physical distance between successive transceivers. If you don't, the nibble effect gnaws away the front end of the second signal. The nibble effect applies to any distributed system using totem-pole drivers.
Next week, Marty presents a work-around that radically alters the waiting requirement. To hear that story, you'll have to come with me to his favorite diner to get some pie...
Speaking of pie, if you know a great place in Rochester, NY, that serves pie and has a full bar, please let me know. That's my favorite kind of joint for a late-night technical discussion.
Dr. Howard Johnson