Transition From Synchronous To Asynchronous Chips
In 1965, Gordon Moore, the co-founder of Intel, observed that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. He predicted that this trend would continue, and it has. Although the original pace of doubling every year has slowed, the doubling still holds; it now occurs roughly every eighteen months. There is, however, a problem with this rate. By the year 2012 Intel plans to have the ability to integrate one billion transistors onto a chip running at a speed of ten gigahertz. Shortly after, however, in the year 2017, the physical limits of wafer fabrication technology will have been reached. This means that either consumers of these microprocessors will have to be content with a speed of ten gigahertz, or a new method of making chips will have to be developed.
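The doubling arithmetic behind this projection can be sketched in a few lines of Python. The starting count of 100 million transistors and the function names below are illustrative choices of mine, not figures from the text; only the eighteen-month doubling period comes from the paragraph above.

```python
import math

def projected_transistors(initial, years, doubling_period=1.5):
    """Transistor count after `years`, doubling every `doubling_period` years."""
    return initial * 2 ** (years / doubling_period)

def years_to_reach(initial, target, doubling_period=1.5):
    """Years needed to grow from `initial` to `target` at that doubling rate."""
    return math.log2(target / initial) * doubling_period

# Illustrative: starting from 100 million transistors, one doubling
# every 18 months reaches one billion transistors in about five years.
print(years_to_reach(100e6, 1e9))  # roughly 4.98 years
```

The same functions make it easy to see why the trend cannot continue indefinitely: exponential growth quickly collides with any fixed physical limit, which is the essay's point about 2017.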
Traditionally, computer microprocessors have been built from silicon integrated circuits, a method invented by Robert Noyce in 1958 and used ever since. This method allowed many transistors to be placed on a single chip, enabling the creation of computers that were smaller, faster, and cheaper than ever before. As these microchips grew faster, a method of measuring performance was needed, so a clock was placed inside the microprocessor. The processor clock is a circuit that emits a series of pulses with a precise pulse width and a precise interval between consecutive pulses.[1] The time interval between the corresponding edges of two consecutive pulses is known as the clock cycle time. This method of pacing the processor is quickly approaching its limits, and because of this the makers of these chips are looking into moving towards a “clockless” logic.[2]
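The relationship between clock frequency and cycle time is simply a reciprocal: a frequency in gigahertz corresponds to a period in nanoseconds. A minimal Python sketch (the function name is my own, not from the text):

```python
def cycle_time_ns(frequency_ghz):
    """Clock cycle time in nanoseconds for a clock frequency given in GHz."""
    # 1 GHz = 1e9 cycles per second, so the period in ns is 1 / f_GHz.
    return 1.0 / frequency_ghz

# The ten-gigahertz chip mentioned above would have a 0.1 ns cycle time:
print(cycle_time_ns(10))  # 0.1
```

At such speeds, every part of the chip must receive the clock edge within a fraction of that 0.1 ns window, which is part of why distributing a single global clock becomes so difficult.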
In order to move from the clock type processors existing today to the “clockless” processors of the future, chip developers are working on a i...