Flip Flops

I have some fundamental questions about Flip-Flops.

  1. What actually causes the metastability in Flip-Flops? I know that if the input changes within the setup window, it causes the Flip-Flop to go into the metastable condition. Normally this setup window is specified with respect to the Clock signal. I want to know what causes the Flip-Flop to go to metastability when the input changes within the setup window. Does the Flip-Flop also go to metastability if the Hold window is not met? And if so, why?
  2. How do the vendors provide zero-hold-time Flip-Flops? Is it by matching the feedback delay?
  3. What is the special construction in edge-triggered Flip-Flops? In the case of a D Flip-Flop, the input side has a control gate (usually a NAND gate) in which the Clock signal becomes the control signal. In this case the gate stays open as long as the Clock is active (i.e. low or high), not just at the transition. Do edge-triggered Flip-Flops have a differentiating circuit to make the width of the clock pulse small? Is there any difference between level-triggered Flops and Latches?

Thank you.

Thanks for your interest in High-Speed Digital Design.

About your first question, my model for the internal workings of a flip-flop looks like this:

  1. A comparator (a limiting amplifier) with a threshold voltage Vc, and a large (positive) voltage gain.
  2. A positive feedback network connected around the comparator. In the absence of any other input signal, the positive feedback keeps the comparator latched in its present state.
  3. A sampling circuit that temporarily connects the flip-flop "D" input to the input of the comparator circuit. When this happens, the "D" input signal overwhelms the feedback and switches the comparator rapidly to a state that matches the "D" input.
  4. A buffer amplifier connected to the output of the internal comparator, with a threshold Vb.
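For intuition, the four pieces above can be caricatured in a few lines of Python. Everything here is an assumption chosen for illustration (the gain, thresholds, time constant, and a crude Euler time step), not a model of any real part:

```python
# Toy behavioral sketch of the comparator-plus-positive-feedback model.
# All numbers are illustrative assumptions, not measurements of any part.
VCC, VC = 3.3, 1.5   # supply rail and comparator threshold Vc (volts)
GAIN = 20.0          # comparator voltage gain (assumed)
TAU = 0.1            # comparator time constant, ns (assumed)

def comparator_step(v_out, dt=0.01):
    """One crude Euler step of the loop: positive feedback amplifies any
    deviation from VC, pushing the output further away from threshold."""
    drive = GAIN * (v_out - VC)
    v_out += (drive / TAU) * dt
    return min(max(v_out, 0.0), VCC)   # output rails at ground and VCC

def sample(d_input, steps=500):
    """Sampling connects D to the comparator input; feedback then takes
    over and latches the state.  (sample(1.5) exactly would balance at
    VC forever: the metastable point of this toy model.)"""
    v = d_input
    for _ in range(steps):
        v = comparator_step(v)
    return v

print(sample(3.0))     # solid HI input: slams to VCC
print(sample(0.2))     # solid LO input: slams to ground
print(sample(1.5001))  # near-threshold input still resolves, just slowly
```

A solid HI or LO input gives the "large-signal response" described above; an input near Vc takes many time constants of feedback buildup before it rails.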

When the clock circuit receives a rising edge, the flip-flop generates an internal pulse, which temporarily connects the flip-flop "D" input to the input of the comparator circuit. This is called the "sampling" operation. Normally, when the "D" input is sampled, if it conforms to the setup and hold rules, it should be at a valid HI or LO level. The comparator thus receives a full-sized input signal, causing a "large-signal response". That is, the comparator is slammed HI or LO right away. The buffer then responds and you have your output. Under these conditions the clock-to-Q time is always predictable (and should always meet the published specifications for the part).

If the input is changing within the setup and hold window, that is, changing just at the moment the sampling circuit looks at it, an intermediate voltage may be impressed upon the input of the comparator. This initially causes an intermediate response at the output of the comparator. The positive feedback network then generates a violent exponential buildup of feedback that forces the comparator to quickly go one way, or the other. When the sampled input signal is very close to Vc, the buffered output may not react until the comparator feedback has had time to build the comparator output up to a recognizable value. The extra reaction delay induced when you sample an intermediate input signal is called the "metastable resolution time".

Modern parts are pretty well-damped internally, so I would be very surprised to see any oscillation during the metastable resolution time. What I expect to see at the comparator output is usually this:

  • First, an initial output level related to the difference between the sampled input level and Vc (times the comparator gain).
  • Then, if the initial output is even the tiniest bit more positive than Vc, the feedback will cause an exponential buildup in the positive direction, ending with the output slammed into Vcc, above which it cannot go.
  • Or, if the initial output is even the tiniest bit less positive than Vc, the feedback will cause an exponential buildup in the negative direction, ending with the output slammed into ground, below which it cannot go.

In either case, after a suitable number of comparator amplifier time-constants, the circuit comes to rest. The number of time constants required depends on how closely the initial sampled signal approximates Vc.

(By the way, if you assume the comparator response is exponential, and you assume the sampled waveform has a linear slope with random time of arrival, you can show that the distribution of metastable resolution times should be inverse-exponential. That is, the probability of observing a resolution time greater than T should decrease exponentially with T. Observations confirm this rule, which is the evidence used to suggest that this model is correct.)
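That inverse-exponential rule is easy to check numerically. A sketch under the stated assumptions: an offset v0 from Vc grows as e^(t/τ), so resolution takes T = τ·ln(swing/|v0|), and a linear ramp with random arrival makes v0 uniformly distributed. (τ and the swing are arbitrary units I've invented for the experiment.)

```python
import math, random

random.seed(1)
TAU = 1.0     # comparator time constant (arbitrary units, assumed)
SWING = 1.0   # distance from the threshold Vc to the rail (assumed)

def resolution_time(v0):
    """An offset v0 from Vc grows as e^(t/TAU), so the time needed to
    reach the rail is T = TAU * ln(SWING / |v0|)."""
    return TAU * math.log(SWING / abs(v0))

# A linear input ramp with uniformly random arrival time gives a
# uniformly random sampled offset from Vc:
times = [resolution_time(v)
         for v in (random.uniform(-1, 1) for _ in range(200_000))
         if v != 0]

for T in (1, 2, 3):
    frac = sum(t > T for t in times) / len(times)
    print(f"P(T_resolve > {T}) = {frac:.4f}   e^-{T} = {math.exp(-T):.4f}")
```

The measured fractions land right on e^(-T), reproducing the exponentially decreasing distribution of resolution times that the model predicts.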

In a buffered part, YOU DON'T SEE THE EXPONENTIAL ACTIVITY because the buffer squares it up. At the output of the buffer, depending on the relation between Vc and Vb, you tend to get one of several effects:

  1. The output does the right thing.
  2. The output does nothing at first, then later pops the other way.
  3. The output goes the wrong way at first, then later pops back.

I'll work out one case for you. Suppose the system begins with the output LO. Let Vcc=3.3V, Vc=1.5V, and Vb=1.4V. Please don't read anything into my choice of numbers; it's just an example.

Let the initial sampled voltage be 1.499V. This is less than Vc, so we know that eventually the system will end in the LO state. What happens initially, however, is that the comparator output jumps to a value near Vc, then begins its descent towards ground. The buffer, because its threshold happens to be a little below Vc, initially responds HI. Later, after the comparator output sinks below Vb, the output changes its mind and snaps LO.
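That timeline can be written out directly. Assuming the comparator's offset from Vc grows as e^(t/τ) (τ is a time constant I've assumed; Vc and Vb are the example's values):

```python
import math

VC, VB = 1.5, 1.4   # thresholds from the example above (volts)
TAU = 1.0           # comparator time constant (assumed, arbitrary units)

def comparator_v(t, v0=1.499):
    """The offset from VC grows exponentially; here v0 < VC, so the
    comparator output descends toward ground."""
    v = VC + (v0 - VC) * math.exp(t / TAU)
    return max(v, 0.0)

def buffer_out(t):
    """The buffer only compares the comparator output against VB."""
    return "HI" if comparator_v(t) > VB else "LO"

for t in (0.0, 2.0, 4.0, 4.7, 6.0):
    print(f"t={t:.1f} tau  comparator={comparator_v(t):.3f} V  buffer={buffer_out(t)}")
```

With these numbers the buffer reads HI until about t = τ·ln(100) ≈ 4.6 time constants, when the sinking comparator output finally crosses Vb and the output "changes its mind" and snaps LO, exactly the wrong-way-then-pop-back behavior described above.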

On old unbuffered CMOS logic, or logic built from electron tubes, you can sometimes see the exponential response directly at the output. On modern logic with a buffered output stage, however, you just see a late transition. This late transition causes a problem for the succeeding logic stages only if it happens to change during the setup and hold window for the next stage.

In a state machine, the spread between the required setup and hold times for the next stage following your synchronizer circuit is pretty wide. If the width of this interval is appreciable compared to one internal comparator time-constant then the probability of getting a late transition SOMEWHERE in that window is pretty much the same as the probability of receiving a late transition ANYWHERE after the setup requirement. In state machine work, we generally just compute the probability of a metastable resolution time GREATER than T.

Sometimes we use a chain of synchronizing registers when sampling an asynchronous signal. In a chain of synchronizers, things work a little differently. The metastable resolution delay at the output of the first flip-flop in the chain causes a problem ONLY if it hits RIGHT ON TOP of the actual metastable sampling window for the second flip-flop. Since this second window is extremely narrow (much narrower than the worst-case published specifications for setup and hold times), your MTBF calculations benefit not only from the amount of resolution time T made available by each stage, but also from the narrowness of the resolution windows. In mathematical terms, the output transition from the first stage has to hit between T and T+dT, where dT is the window width of the second stage, in order to cause an error. This effect renders a two-stage (or three-stage) sampler running at rate R almost as effective as a single-stage sampler using a slower clock of R/2 (or R/3).
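In practice this line of reasoning is usually packaged into the standard synchronizer MTBF formula. A hedged sketch (the formula is the common textbook one, not from the letter, and every device parameter below is invented for illustration):

```python
import math

def synchronizer_mtbf(t_r, tau, t0, f_clk, f_data):
    """Textbook synchronizer MTBF estimate (seconds):
        MTBF = e^(t_r / tau) / (t0 * f_clk * f_data)
    t_r    : resolution time allowed before the next stage samples (s)
    tau    : flip-flop metastability time constant (s)
    t0     : device metastability-window parameter (s)
    f_clk  : sampling clock rate (Hz); f_data: async event rate (Hz)."""
    return math.exp(t_r / tau) / (t0 * f_clk * f_data)

# Illustrative numbers (assumed, not taken from any datasheet):
tau, t0 = 100e-12, 50e-12          # seconds
f_clk, f_data = 100e6, 10e6        # Hz

one_stage = synchronizer_mtbf(8e-9, tau, t0, f_clk, f_data)
two_stage = synchronizer_mtbf(18e-9, tau, t0, f_clk, f_data)  # one extra period
print(f"one stage:  {one_stage:.2e} s")
print(f"two stages: {two_stage:.2e} s")
```

The exponential dependence on t_r is the whole story: each extra clock period of resolution time multiplies the MTBF by e^(period/τ), which is why adding a second register stage is so effective.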

I think John Wakerly covers a lot of good points about metastability in his book, "Digital Design Principles and Practices", Prentice-Hall, 1990, ISBN 0-13-212838-1. He has a nice "ball and hill" description that I find very helpful.

To make a zero-hold flip-flop, you insert a delay line in series with the D input. Normally, every flip-flop circuit has both a setup and a hold time. In other words, the input must be held stable before, during, and after the clock to make the part work properly. If you insert a delay in series with the D input, you then have to get the D input transitions set up EXTRA EARLY in order to make it through the delay, and still arrive prior to the clock. From the data-sheet perspective, a part with an internal delay in series with the D input will have an extra-large setup time. At the same time that your internal delay has advanced the setup time, it has also advanced the hold time (making it smaller). A suitably large D-input delay can erase the hold time required. That's how you make a zero-hold part. Note that since there is necessarily a substantial uncertainty about the precise amount of delay involved, the advertised setup-to-hold window width of a zero-hold flip-flop will always exceed the setup-to-hold window width of a similar non-zero-hold flip-flop implemented without the extra delay.
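The arithmetic of the delay trick is simple enough to write down. A sketch with hypothetical numbers (the 1.0 ns setup, 0.5 ns hold, and delay spread are all invented for illustration):

```python
def shifted_window(setup, hold, delay):
    """A delay D in series with the D input shifts the whole timing
    window earlier: the setup time grows by D, the hold time shrinks
    by D, and the window width (setup + hold) is unchanged."""
    return setup + delay, hold - delay

# Hypothetical raw flip-flop: 1.0 ns setup, 0.5 ns hold (assumed).
print(shifted_window(1.0, 0.5, 0.5))   # (1.5, 0.0): a zero-hold part

# If the internal delay can vary from 0.5 to 0.7 ns, the vendor must
# size it so the FASTEST delay still erases the hold time, and quote
# setup for the SLOWEST delay.  The advertised window widens from the
# underlying 1.5 ns to 1.7 ns:
worst_setup = shifted_window(1.0, 0.5, 0.7)[0]       # slowest delay
guaranteed_hold = shifted_window(1.0, 0.5, 0.5)[1]   # fastest delay
print(worst_setup + guaranteed_hold)
```

The last two lines show why the advertised setup-to-hold window of the zero-hold part necessarily exceeds that of the underlying flip-flop: the delay-line uncertainty gets added straight into the window.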

About the difference between flip-flops and latches, it used to be that people made a big distinction between these two circuits. Here are the definitions that generally applied when I started doing digital logic design in 1970:

LATCH: an asynchronous part that could be set to one state, or reset to another state. The control signals (clocks) to a latch were assumed to be PULSES, that is, part of the latch circuitry started working when the clock went high, but the work wasn't done until the clock came back down low. A lot of the old NMOS circuitry was built using these types of latches, with multiple clock phases used to sequence the processor through successive stages of latches.

FLIP-FLOP: a latch with a funky internal circuit on the clock input, such that every time you put in a rising edge on the clock, the internal circuit generates its own little pulse. The internal pulse then operates the latch. You are correct in thinking that a differentiating circuit might be one way to construct such a circuit.

Latches are simpler, but flip-flops are more convenient to design with. In 1970, if you had been building logic from discrete transistors (or tubes), the difference in complexity would have seemed significant.

Today, most designers use flip-flops, to the point that many people have forgotten what constitutes a latch. I see many articles and datasheets where the term "latch" is now being used (I think incorrectly) as just another word for "flip-flop".

Best Regards,
Dr. Howard Johnson