Living on the Edge (Rate) — It’s All About Time and Distance
When it comes to the PCB design process, references are made to “edge rates” in various ways, but the science behind the concept is often not made clear. Some people assume edge rate refers to the connection between an IC and the board, while others believe it is tied to clock frequency. Neither is correct: edge rate is really about rise and fall times and the length of nets.
The focus of this article is to clarify the origins of the phrase edge rate, what it really means relative to the design process and the best means for addressing it.
The Definition
In its simplest form, the term edge rate is the rate of change of a logic switching edge. It is measured in volts per nanosecond or a similar unit. (For the purposes of this article, I rely on volts per nanosecond.) The key thing to keep in mind: Edge rate is not the connection between an IC and a board, nor is it the clock frequency.
As Lee Ritchey, Founder and President of Speeding Edge notes, “When we talk about edge rates we are really talking about the rise and fall times required for a signal to switch between two voltage levels.”
Not so many years ago, a typical rise or fall time was 20 ns. Now we are in the picosecond range, on the order of a dozen or so picoseconds. That’s the time element referred to above. But a design doesn’t stop at the time element; the length of the transmission line also needs to be factored in. Thus, in the final analysis, the edge rate is about two things: time and distance. Ritchey explains, “We are concerned about travel time for two reasons. First, we need to make sure that the signal gets to where it’s supposed to go at the right time. If the path is too long, there might be a timing problem. Next, we need to look at how long the net is compared to the rise time.”
He adds, “With respect to edge rates, or more precisely, rise and fall times, in most modern designs the problem is not clock timing. Nearly all the signal paths are differential and, as such, they are self-clocking.” To better grasp the concept of time versus distance, note that the average velocity of a signal in a PCB is about six inches (15 cm) per nanosecond. Once that concept is understood, it’s necessary to compare the rise time to the length of the path. These two elements together provide the travel time needed for timing a circuit or, more accurately, for arriving at the rise and fall times for that circuit. So, in this context, the discussion centers on reflections.
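To put numbers on the time-versus-distance idea, here is a minimal Python sketch, assuming the roughly six inches (15 cm) per nanosecond velocity quoted above; the function name and the example net lengths are illustrative, not taken from the article.

```python
# Rough one-way flight-time estimate for a PCB net, assuming the article's
# ballpark propagation velocity of about 6 inches (15 cm) per nanosecond.
SIGNAL_VELOCITY_IN_PER_NS = 6.0  # varies with dielectric and layer, so treat as a rough figure

def flight_time_ns(net_length_in: float) -> float:
    """Return the one-way travel time, in nanoseconds, for a net of the given length in inches."""
    return net_length_in / SIGNAL_VELOCITY_IN_PER_NS

if __name__ == "__main__":
    for length_in in (1.5, 6.0, 12.0):  # hypothetical net lengths
        print(f"{length_in:4.1f} in net -> ~{flight_time_ns(length_in):.2f} ns flight time")
```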
With respect to rise time versus length of path, consider the example of a signal with a one-nanosecond rise time. The distance the signal travels during that rise time is, as noted above, six inches (15 cm). At ¼ of that length, which equates to 1½ inches or 3.75 cm, reflections begin to be a potential problem. Taking this one step further, 100 ps is 1/10th of a nanosecond, and 1/10th of 1.5 inches is 150 mils. This becomes the length at which product developers need to start worrying about the rise and fall times in their designs.
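That quarter-of-the-rise-time guideline reduces to a one-line calculation. The sketch below is a hedged illustration rather than anything spelled out in the text; it reproduces the article’s numbers, with a 1 ns edge giving roughly 1.5 inches (1,500 mils) and a 100 ps edge giving roughly 150 mils.

```python
# Critical-length rule of thumb: reflections start to matter once a net is longer
# than about one quarter of the distance the signal travels during its rise time.
SIGNAL_VELOCITY_IN_PER_NS = 6.0  # ~6 inches (15 cm) per nanosecond, as above

def critical_length_mils(rise_time_ns: float, fraction: float = 0.25) -> float:
    """Length, in mils, beyond which a net should be treated as a transmission line."""
    rise_distance_in = rise_time_ns * SIGNAL_VELOCITY_IN_PER_NS
    return rise_distance_in * fraction * 1000.0  # inches -> mils

if __name__ == "__main__":
    for t_rise_ns in (1.0, 0.1):  # the 1 ns and 100 ps edges discussed above
        print(f"t_rise = {t_rise_ns * 1000:5.0f} ps -> start worrying above ~{critical_length_mils(t_rise_ns):,.0f} mils")
```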
“Nowadays, a 100 picosecond edge is common,” Ritchey states. “I advise my students that if they don’t know the rise time of the circuits, go ahead and design for 100 picoseconds because sooner or later that rise and fall time is going to show up.”
Reflecting on the Problem
Given the dimensions noted above, reflections become the main issue of concern.
Based on this, two pieces of engineering must be done (see the sketch after this list for the underlying reflection math). They include:
- Controlling the impedance of the transmission line.
- Adding terminations.
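To see why both steps matter, here is a minimal sketch of the standard reflection-coefficient calculation at the end of a line. This is general transmission-line math rather than a calculation taken from the article, and the impedance values are illustrative.

```python
# Reflection at the end of a transmission line: gamma = (Z_load - Z0) / (Z_load + Z0).
# A line terminated in its own characteristic impedance reflects nothing; an open or
# badly mismatched end sends a large fraction of the incident edge back toward the driver.

def reflection_coefficient(z_load_ohms: float, z0_ohms: float) -> float:
    """Fraction of the incident wave reflected at the load."""
    return (z_load_ohms - z0_ohms) / (z_load_ohms + z0_ohms)

if __name__ == "__main__":
    z0 = 50.0  # hypothetical controlled-impedance target
    for z_load in (50.0, 75.0, 1_000_000.0):  # matched, mismatched, essentially open
        print(f"Z_load = {z_load:>9.0f} ohm -> reflection coefficient = {reflection_coefficient(z_load, z0):+.2f}")
```

Controlling the impedance keeps the characteristic impedance predictable along the route, and terminating the line in that same impedance drives the reflected fraction toward zero.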
If we expand on the measurements noted above, we can look at what happens with a rise and fall time of 10 picoseconds, which is 1/10th of the 100-picosecond starting guideline. With a 100 ps edge, any line 150 mils or longer requires control; with a 10 ps edge, a connection of only 15 mils requires control. It is not possible to have traces that short in a PCB, but they do exist by the millions in IC devices.
Ritchey states, “If you say the rise time is a nanosecond, the standard definition is the time between 10% voltage value and 90% voltage value. That’s the rise and fall time. So, when we use the term edge rate, we are really talking about volts per nanosecond.” If the edge rate is given as 1 volt per nanosecond, with a signal swing of only one volt, the rise time is one nanosecond. If the signal swing is five volts, the rise time becomes five nanoseconds.
Ritchey notes, “I tend not to use the term ‘edge rate’ because, if you do, you have to know what the signal swing is. If, instead, we say the rise time is ‘X’, everybody has a clear understanding of what is meant by that.”
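The arithmetic relating edge rate, signal swing, and rise time is simple division; the sketch below makes that relationship explicit (the function name and example values are mine, not the article’s).

```python
# Relating edge rate (V/ns), signal swing (V), and rise time (ns).
# As in the article's example: at 1 V/ns, a 1 V swing takes 1 ns and a 5 V swing takes 5 ns.

def rise_time_ns(swing_v: float, edge_rate_v_per_ns: float) -> float:
    """Rise time implied by a given edge rate and voltage swing."""
    return swing_v / edge_rate_v_per_ns

if __name__ == "__main__":
    edge_rate = 1.0  # volts per nanosecond
    for swing_v in (1.0, 5.0):
        print(f"{swing_v:.0f} V swing at {edge_rate:.0f} V/ns -> rise time = {rise_time_ns(swing_v, edge_rate):.0f} ns")
```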
Complicating the issue, the rise and fall times change with each new iteration of a semiconductor process, and the gate length of the transistor becomes the yardstick.
Ritchey explains, “If we say 150 nm, that’s how long the gate is. When we scale down, the gate length gets shorter, and the rise times get faster.”
“For as long as I have been in this business,” he continues, “all the speed improvements have come from making the capacitors you have to charge up smaller. That’s why we get much more computing power for the same number of watts. The whole business of logic is charging and discharging the parasitic capacitance of the transmission lines and the input gates of the transistors.”
“In addition to the gate capacitors getting smaller, the parts have also gotten smaller, and they are closer together. This means the transmission line capacitance that is charged is smaller. I sometimes think I need to sit down and do the math of how many calculations we have gotten to per watt because it is probably twice Moore’s law in terms of the gains we have achieved in the number of MIPs per watt,” he adds.
Note: Moore’s law originally stated that the number of transistors on a microchip doubles every two years.
Ritchey notes, “As an example, the Amdahl computer, which was up and running in the mid-70s, was about 1 MIP for 20 kW. The typical mobile phone is about 200 MIPs, and people complain because the battery runs down in about a day.”
“In 2002, I provided consulting services for the design of a supercomputer that was running one trillion floating-point operations per second. Last year, the next rev of that computer ran 200 trillion flops for two megawatts. This represented a two hundred times performance increase over the time span of 17 years. I think we are finally nearing the limit because how can we make things (rise and fall times) go up and down any faster than that? To go up and down in 10 ps is asking for too much in terms of traditional PCB and IC implementations. That’s why people are advocating for things such as PAM4.”
If there is one common mistake that designers continue to make, it is looking at the clock frequency to determine whether a design is fast or slow, or whether a net is critical. We had one instance in which a designer decided a power-on reset line was not critical because it only switched when the power was turned on. He ignored the impact of that switching event, and the design failed because of reflections. Thinking that there is such a thing as a non-critical signal is a big mistake. We also provided consulting for the design of a pulse oximeter. The clock frequency was only 1 MHz, so the product developers decided that they did not need to control the impedance. That design also failed from uncontrolled reflections.
Ritchey concludes, “The hardest thing is getting people to understand the relationship between time and distance. I have not seen any engineers who come out of college understanding that concept. More importantly, professors and instructors don’t understand that this is a critical factor that product developers need to know to do their jobs correctly.”
Summary
Thinking that edge rates are a function of the interconnect between the board and the IC, or a result of the clock frequency, ignores what is actually happening on the nets. Rise and fall times, along with the lengths of the nets, are the factors that need to be accounted for to ensure that a product works as designed the first time and for the life of the product.
Have more questions? Call an expert at Altium or continue reading about adopting signal integrity in your high-speed design process.
Reference
- Ritchey, Lee W., and Zasio, John J., “Right the First Time: A Practical Handbook on High Speed PCB and System Design,” Volume 2.