The IEC awards grants to students and professors to attend IEC events through its University Program. This gives professors and their students a chance to view their field of study from the industry's perspective, and to interact with practicing engineers in their field.
At DesignCon 2003, Ralph Cavin, Vice President of the Semiconductor Research Corporation and a member of the IEC Board of Directors, chaired a panel discussion for these students and professors. The panel was titled, "The Next Generation Electronic Design Engineer: Challenges and Opportunities."
I was invited to address the panel, as was Brian Bailey, Chief Technologist at Mentor Graphics. The format called for short presentations from the panelists, followed by an extended discussion period with the audience. My purpose was to introduce the students to some of the issues surrounding high-speed digital design and encourage them to take up a career in electronics. What follows are my slides and speaking notes.
I've been asked to put together my thoughts on the state of education for digital engineers. This is a good topic for me, as through my books, seminars, and my work at the University of Oxford I come into contact with literally thousands of designers every year. This experience has led me to a certain understanding of their capabilities.
The first thing to realize is that digital design is becoming progressively more difficult.
I tried to capture that sense of difficulty with the artwork on this slide.
The difficulty comes in part because customers now expect more from their systems, in terms of power, reliability, low cost, EMC, wireless operation, speed and better user interfaces.
The difficulty is compounded by the fact that, as a class, digital engineers are less well equipped to face the problems encountered in high-speed design than they were 30 years ago.
In the distant past, prospective students of computer architecture all took a common electrical engineering curriculum. This standard curriculum included basic analog circuits, transmission lines, and linear systems theory—three crucial subjects required to understand how high-speed digital hardware really works.
At the time, such an education seemed like overkill. The computer hardware of that early era was so slooowwww that few people needed to know much about analog circuitry to make their systems function.
For example, a typical LS-TTL logic gate has a rise/fall time of about 20 nanoseconds. By today's standards, that is very slow. It doesn't take an analog guru to plug together a lot of LS-TTL logic.
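To put that 20-nanosecond edge in perspective, here is a small sketch using the common rule of thumb that the "knee frequency" of a digital edge is roughly 0.5 divided by the rise time. The specific modern edge rate used for comparison is an illustrative assumption, not a figure from the talk.

```python
# Rule-of-thumb knee frequency: F_knee ~ 0.5 / T_rise, a common
# estimate of the highest frequency at which a digital edge still
# carries significant energy.

def knee_frequency_hz(rise_time_s: float) -> float:
    """Estimate the knee frequency of a digital edge, in hertz."""
    return 0.5 / rise_time_s

# LS-TTL edge (~20 ns): energy rolls off around 25 MHz.
ls_ttl_knee = knee_frequency_hz(20e-9)

# A hypothetical modern edge (~500 ps): energy extends to ~1 GHz,
# forty times higher, which is why analog effects now matter.
modern_knee = knee_frequency_hz(0.5e-9)

print(f"LS-TTL knee frequency: {ls_ttl_knee / 1e6:.0f} MHz")
print(f"Modern knee frequency: {modern_knee / 1e9:.1f} GHz")
```

The point of the arithmetic: shrinking the rise time by a factor of forty raises the bandwidth of the signal by the same factor, regardless of the clock rate.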
As a result, college and university administrators have begun to feel they can safely drop the old analog curriculum in favor of newer, more modern computer science classes. There are even those who suggest that analog design is no longer relevant.
I don't hold anything against the field of computer science. It's a wonderful discipline. Without it we wouldn't have the CPU sophistication we have today, or nearly as many people trained to work with complex computer architectures.
I don't blame our colleges and universities, either, for the decisions they have made. A modern university has a lot on its plate. Information overload is a terrible problem. There are too many subjects to teach, and too few hours available in which to teach them. I understand that.
The result, though, has been near disaster for many high-speed design projects.
In the 1970s, only a few guys at Cray, IBM, and Amdahl needed to understand signal propagation.
Today, all of us need the same level of understanding.
Dropping analog circuits from the computer-science curriculum will prove a costly mistake.
I'm going to illustrate the importance of analog circuit concepts with three examples showing how inadequate education gets in the way of good high-speed design.
First, look at any digital schematic. Consider the logic nets that carry digital signals from gate to gate. Basic circuit theory teaches that logic signals are propagated by the flow of current, specifically by the movement of electrons, and that these currents always flow in loops.
One property of electron flow is that the electrons don't stack up anywhere. They don't just flow into the receiver and pile up in a bit bucket. The intense electric fields generated by each electron prevent that from happening. Whatever current goes out must come back, and the path for returning signal current is just as important as the path for outbound signal current.
On the schematic, however, the paths for returning signal currents are not even shown.
Many digital engineers therefore assume that the return paths are irrelevant. After all, both drivers and receivers are specified as voltage-mode devices, so why worry about the current?
This great misconception is reinforced by manufacturers of oscilloscopes and logic analyzers who primarily market voltage-mode probes. If we had good current-sensing probes with a pinpoint proximity sensor small enough to see the current flowing on an individual BGA ball, the flow of current would suddenly become a "reality" for many engineers rather than a merely theoretical concept.
A misunderstanding of the need for a good return-current path results in two very common system flaws:
High-speed buses that flow across slots and gaps in digital ground planes, thereby picking up inordinate amounts of crosstalk, and
Connectors with inadequate numbers of ground pins.
My second example concerns magnetic fields. I attribute the lack of understanding of magnetic fields to our educational system, with its disproportionate focus on electric-field behavior. That focus leaves many engineers believing that most coupling problems are electric, or capacitive, in nature.
This belief is a relic of the tube era, which was characterized by very high-impedance circuits. For example, the plate circuit of a tube might have an impedance of 100,000 ohms, much higher than the impedance of free space (377 ohms). Such a circuit involves HUGE voltages and tiny currents. Therefore, near-field energy surrounding a tube exists predominantly in the electric field mode, and most crosstalk problems involve electric-field, or capacitive, coupling.
Today's high-speed digital systems use low-impedance circuits, near fifty ohms, much lower than the impedance of free space. These circuits use tiny voltages, but HUGE currents. Therefore, the near-field energy surrounding a digital circuit exists mostly in the magnetic-field mode, not electric. Most crosstalk, ground bounce, and interference problems in high-speed digital systems involve loops of current, magnetic fields, and inductance.
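The impedance comparison above can be reduced to simple arithmetic. The sketch below uses the figures from the talk (a 100,000-ohm plate circuit, a 50-ohm digital net, 377 ohms for free space); the crude classification rule, comparing circuit impedance to the impedance of free space, is a simplification for illustration only.

```python
ETA_0 = 377.0  # approximate impedance of free space, in ohms

def near_field_character(circuit_impedance_ohms: float) -> str:
    """Crude classification: circuits with impedance well above
    377 ohms store near-field energy mostly in the electric field;
    circuits well below 377 ohms store it mostly in the magnetic
    field."""
    if circuit_impedance_ohms > ETA_0:
        return "electric-field (capacitive) dominated"
    return "magnetic-field (inductive) dominated"

# Tube-era plate circuit: huge voltage, tiny current.
print("Tube plate circuit:", near_field_character(100_000))

# Modern digital net: tiny voltage, huge current.
print("50-ohm digital net:", near_field_character(50))
```

Running this classifies the 100,000-ohm tube circuit as electric-field dominated and the 50-ohm digital net as magnetic-field dominated, which is the crux of the argument: the coupling mechanisms, and therefore the appropriate shielding strategies, are different.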
In the world of EMC, it is common knowledge that the near-field energy surrounding a digital board is mostly magnetic. Most digital folks don't know that, and they waste inordinate amounts of time trying to construct electric-field shielding for their circuits when what they need is magnetic-field shielding.
My third example concerns the ground reference. On a typical product datasheet, the input voltage sensitivity is rated in units of absolute volts. It is not clearly stated that the gate responds only to the difference between the voltages on its input pin and its designated reference pin. Nor is it made clear which pin is the designated reference. (For TTL it's usually the most negative power rail; for ECL it's the most positive, but this rule doesn't always work.)
This ambiguity in the designation of the signal reference leads many engineers to think that a gate can sense "absolute zero" volts, as if it had a magic wire leading out of the chip to the center of the earth that could pick up a "true" ground reference potential. As a consequence, they fail to comprehend the difficulties that arise when the reference voltages at two points in a system are unequal.
No vendor wants to admit that all their chips are susceptible to ground shifts, so we can't expect them to talk about it on their data sheets. On the other hand, we all need to remember that large ground shifts between chips are likely to cause malfunctions.
Most digital designers have spent little time thinking about the existence of different ground potentials in their systems, or the mechanisms that create ground shifts.
So, if our engineers are lacking key bits of knowledge, what do we do? Nobody can change our educational structure overnight.
I don't have a complete answer, but I can tell you briefly about one model program recently established in the U.K. at the University of Oxford. I've been teaching my two-day seminar there for about ten years; it has become the most popular engineering short course in the history of Oxford's continuing-education program.
We have recently expanded the high-speed digital engineering program, bringing together four top experts in the field. Each delivers a two-day short course focusing on his or her particular specialty. These courses are designed for people in industry who, for whatever reason, decide they need a better understanding of high-speed effects. Tuition for the vast majority of students is paid by their employers.
The courses are advertised through mass mailings, magazine editorials, emails to former students, and the University web site. This past year, in a terrible economy, the series brought in a total of 156 students.
In subsequent years I look forward to developing graduate-level research programs at the University for students specializing in high-speed digital engineering. If there are any of you in the audience, by the way, for whom that sort of program sounds appealing, please let me know.
If the graduate program works, I'll drill down into the undergraduate curriculum to see what changes we can make there to better equip digital designers for the work of the future.
Thanks for listening, and now I'll open the floor for discussion.