News Focus
Posted Sunday, December 11, 2005 11:18:29 PM

Leading by design: Q&A with Dr. Raghuram Tupuri, AMD
Chris Hall, DigiTimes.com, Taipei [Monday 12 December 2005]


AMD’s drive to 64-bit processors surprised everyone with its speed, even as detractors commented that there would be little or no performance gain on the desktop without a 64-bit OS and 64-bit applications. Whatever the doubts within the industry, Intel lost little time in offering its own version of AMD64, in the form of the EM64T extensions. Traditionally perceived as the underdog in the cutthroat world of microprocessors, AMD managed to take the design initiative at exactly the moment Intel was fixated on power consumption and the move to dual cores. In Intel’s own corporate jargon, it was AMD that had managed to create an inflection point, a key moment of change in the dominant technology.

DigiTimes.com recently had an opportunity to discuss AMD’s approach to microprocessor design with Dr. Raghuram Tupuri, general manager, Microprocessor Solutions Sector – Design Engineering, AMD.

This is the first part of a two-part interview. Part II follows on 13 December.

Q: The consensus within the industry is that AMD64 has clearly been a tremendous achievement for AMD and to a large extent has re-defined the competitive landscape in microprocessors. What was the design approach – if you like, the design philosophy – that guided development of AMD64? It’s often been remarked, for example, that Dirk Meyer, who came to AMD from design work on the Alpha processor, had a considerable influence on AMD64.

A: I worked very closely with Dirk Meyer on the K7/Athlon processor. We both worked on micro-architectural aspects of the K7. He’s a very inspiring and hands-on type of leader, and that’s the management style we have developed in all our microprocessor groups at AMD – very hands-on, with good technical understanding. Dirk always says that he thinks of himself as an engineer at heart, and his background and experience definitely helped him develop his leadership style, walking around, talking with the engineers. He doesn’t miss an opportunity to meet the engineering team and learn about their progress and challenges first hand. That style suits AMD’s design strategy, which is to keep the design teams as small and as experienced as possible, to maximize communication. The key is to enable the team members with all the information they need, leaving them challenged and empowered to do more than their job.

The core design philosophy is to deliver higher performance to the end user. As micro-architectural improvements are achieved, they are evaluated with respect to the end-user experience and future software needs. My own view is that we have not yet reached the limits of what can be achieved by micro-architectural changes and enhancements. As process technology advances and more transistors become available, we will continue to see the adoption of advanced micro-architectural techniques. We will continue to see a lot more prefetch in hardware, and a lot more speculative execution, but at the same time we will be factoring in power consumption.
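
As a rough illustration of the prefetch idea, here is a minimal C sketch using GCC’s __builtin_prefetch hint. The interview describes hardware prefetch, which the processor performs transparently; this is only the software analogue of the same technique, with the loop and the look-ahead distance invented for the example.

```c
#include <stddef.h>

/* Illustrative only: request the cache line a fixed number of iterations
 * ahead so the data has (hopefully) arrived by the time it is needed.
 * Hardware prefetchers do the equivalent automatically, with no hints. */
long sum_with_prefetch(const long *data, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 16 < n)
            __builtin_prefetch(&data[i + 16], 0, 1); /* read, low temporal locality */
        total += data[i];
    }
    return total;
}
```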

Delivering performance was always the focus for us. At AMD we have a history of emphasizing performance rather than processor frequency, but now, especially for our corporate users, the question becomes one of power efficiency, performance per dollar per watt. The design challenge now becomes, “How much performance can I squeeze out of the wattage?”
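
To make the performance-per-dollar-per-watt figure of merit concrete, here is a toy C calculation. The design names, scores, prices, and wattages are invented for illustration and are not real product figures.

```c
#include <stdio.h>

/* Toy figure of merit: performance divided by (price * power).
 * All numbers below are made up for the example. */
struct design_point {
    const char *name;
    double perf_score;   /* e.g., benchmark throughput */
    double price_usd;
    double power_watts;
};

int main(void)
{
    struct design_point candidates[] = {
        { "design A", 1000.0, 300.0, 90.0 },
        { "design B",  850.0, 220.0, 55.0 },
    };
    for (int i = 0; i < 2; i++) {
        double merit = candidates[i].perf_score /
                       (candidates[i].price_usd * candidates[i].power_watts);
        printf("%s: perf/($*W) = %.4f\n", candidates[i].name, merit);
    }
    return 0;
}
```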

At AMD, a microprocessor designer has to work within a power budget, and each application tends to have a different power profile. For example, if you are converting a DVD video, then you are using one section of the processor core quite heavily, but if you are simply typing an email, it becomes more a question of I/O resources. That’s the type of question being asked in processor design. What application or set of applications are we targeting, and what is the power profile?

Users buy new processors because they want to have more performance, and higher performance is definitely on the horizon, but it has to be delivered within certain power constraints. Previously we operated within a transistor budget. We had a certain number of transistors available, and within that budget we had to improve performance. The die size was always limited, both by economic factors and by the limitations of the technology of the time. We now have more transistors, but certain strings are attached. We now have to improve performance within a power budget.

Q: Commentators have noted that what’s fascinating about the AMD64 architecture is how it maintains full compatibility with the much-maligned x86 architecture while also delivering a RISC operational and programming environment. How accurate is this assessment?

A: There was always a tendency at AMD to assume that we could put x86 on top of RISC. I remember when we were working on the 29K RISC processor, we always thought that, with some additional hardware, we could put x86 on top of the RISC core and execute CISC instructions. Once we knew we could implement that kind of design, it was incorporated in several products, including the K7, K6, and K5, and, of course, the Opteron. AMD had had RISC technology for a long time. Its 29K microprocessors were popular in embedded applications. Later, when these 29K engineers moved to x86 processor design, the result was CISC designers who have RISC at heart!
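
A minimal sketch of the “x86 on top of RISC” idea, assuming an invented micro-op encoding: a CISC instruction that reads memory and adds in one step, such as add eax, [mem], is cracked by the front end into simple RISC-like micro-ops that the core actually executes.

```c
#include <stdio.h>

/* Toy decoder: crack "add reg, [mem]" into two RISC-like micro-ops.
 * The encoding (temp register 99, -1 for "unused") is invented. */
typedef enum { UOP_LOAD, UOP_ADD } uop_kind;

typedef struct {
    uop_kind kind;
    int dst, src, addr;   /* register numbers / memory address id */
} micro_op;

int decode_add_reg_mem(int reg, int mem_addr, micro_op out[2])
{
    out[0] = (micro_op){ UOP_LOAD, 99, -1, mem_addr }; /* load temp <- [mem] */
    out[1] = (micro_op){ UOP_ADD, reg, 99, -1 };       /* add reg <- reg + temp */
    return 2;
}

int main(void)
{
    micro_op uops[2];
    int n = decode_add_reg_mem(0 /* "eax" */, 42 /* address id */, uops);
    for (int i = 0; i < n; i++)
        printf("uop %d: kind=%d dst=%d src=%d addr=%d\n",
               i, uops[i].kind, uops[i].dst, uops[i].src, uops[i].addr);
    return 0;
}
```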

Q: How much more mileage is left in x86? Are there any realistic alternatives, given the less than compelling performance of Intel's IA-64 architecture and Itanium processors?

A: I would be surprised if, a few years down the road, anyone even remembers the Itanium processor. Intel now offers two 64-bit solutions, IA-64 and AMD64. I think that tells you all you need to know about the success of x86 and AMD64.

I think there is still quite a bit of mileage left in x86, but the actual mileage will be determined by the software base. If I’m a software vendor, what I want to focus on is developing new software and selling it to more end users. If the software needs to be ported to different processor platforms, then it is time or money deducted from the development of new algorithms or the improvement of existing programs. Given the ubiquity of x86, you only need to develop a set of binaries once, and they can then be applied in any number of instances. The processors now known as x86 first began life as desktop processors. Now they are being used in both servers and laptops. AMD64 is now enabling x86 in the server market in a big way. In the near future, we expect AMD64 processors to be used in network switches. Already, they are being used in embedded applications, and later they will be used across the complete computing spectrum. I believe x86 is here to stay, whether you love it or hate it.

Whatever criticisms have been leveled at x86, it remains the longest surviving instruction set. No other instruction set has had this long a lifetime. Other instruction sets have come and gone, but x86 lives on. A technologist may not like x86; it may not be the sleek instruction set everyone would like to see, but in the end, it’s the end users who determine which technology will be used. The marketplace determines the acceptance of a particular instruction set.

In terms of software development, for example, the portability is already there. You take the binary and plug it in, and it runs the same day; you don’t need to do any qualification. If you move to any other instruction set and have to port, then you have to go through all the qualification cycles, and that’s a very expensive option. In the corporate world, it’s simply not possible.

Q: Given the complexity of processors such as the K7 and the K8, and the need to get these devices to market under ever more severe time constraints, what test techniques are being implemented by AMD? We hear a lot these days about design for test (DFT). Is DFT a key strategy for AMD?

A: It so happens, my first job at AMD was to look at the implementation of DFT strategies on AMD microprocessors and coordinate DFT activities across the company. AMD designers were already using several DFT techniques – SCAN, for example – and we were looking at the long-term strategy for our DFT techniques. The question was how various test technologies might be unified, and how they might scale to future technologies. IDDQ testing, Built-in Self-Test (BIST) and automatic test pattern generation (ATPG) tools were becoming popular. At that time, I was working very closely with a few startup ATPG vendors on a variety of issues and solutions, and the direction AMD wanted to go in. Later, these startups were acquired by big CAD companies. Every AMD chip now incorporates DFT and BIST technology, and we use this combination very heavily.
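
One classic BIST building block is an on-chip linear feedback shift register (LFSR) that generates pseudo-random test patterns. The following C model of a maximal-length 16-bit Galois LFSR illustrates the concept only; it is not AMD’s implementation, and real DFT logic lives in hardware, not software.

```c
#include <stdint.h>
#include <stdio.h>

/* One step of a 16-bit Galois LFSR with taps 16,14,13,11 (0xB400),
 * which cycles through all 65535 nonzero states before repeating. */
static uint16_t lfsr_next(uint16_t state)
{
    uint16_t lsb = state & 1u;
    state >>= 1;
    if (lsb)
        state ^= 0xB400u;
    return state;
}

int main(void)
{
    uint16_t state = 0xACE1u;          /* any nonzero seed works */
    for (int i = 0; i < 8; i++) {      /* first eight test patterns */
        printf("pattern %d: 0x%04X\n", i, state);
        state = lfsr_next(state);
    }
    return 0;
}
```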

Q: What design tools are you now using at AMD? Has AMD developed its own EDA tools or does the company rely on standard commercial offerings or a combination of the two?

A: AMD has always used a mix of in-house and external tools. If you look at microprocessor development, we are pushing the envelope in all directions. In general, the semiconductor industry runs somewhere between three and five years behind microprocessor technology. So we use whatever tools are commercially available, but when it comes to pushing the latest technology, we turn to our own internal tool development. We have a team that looks into new design issues that require in-house tool development. Once external tools come into the picture, we work with the vendors, in exactly the same way we worked with ATPG tool vendors in the past, to define and incorporate the features we want; then, once the tool is available, we start phasing out our own internal tools and go with the external tool.

An example would be signal-noise analysis. We’ve been doing our own signal-noise analysis since the K7 days. It’s only in the past two or three years that external tools have become available.

Q: In what direction should test go? What new developments would you like to see in test?

A: There are two issues here. For the production environment itself, I would like to see more and more self-test being used, and that includes more reliable self-test procedures. Research is ongoing at 45 nanometers and below, and that is where the research community should be focusing its self-test development efforts. On the tool side, I think there should also be more emphasis on diagnostic capability, because as we move further into deep submicron, process failures are becoming much more complex.

Test compression will be a key factor. Test time is becoming expensive, and test compression reduces cost. That reduced cost can then be passed on to the end user.
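
A simplified sketch of why test data compresses well: scan vectors tend to contain long runs of identical or don’t-care bits, so even a naive run-length encoding shrinks them considerably. Commercial test-compression schemes are far more sophisticated; this only illustrates the compressible structure in the data.

```c
#include <stdio.h>
#include <string.h>

/* Print a run-length encoding of a scan vector, where 'X' marks
 * don't-care cells as in typical ATPG output. */
void run_length_encode(const char *vec)
{
    size_t n = strlen(vec);
    for (size_t i = 0; i < n; ) {
        size_t j = i;
        while (j < n && vec[j] == vec[i])
            j++;
        printf("%c:%zu ", vec[i], j - i);
        i = j;
    }
    printf("\n");
}

int main(void)
{
    run_length_encode("0000XXXXXXXX11110000000XXXX");
    return 0;
}
```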

http://www.digitimes.com/news/a20051212PR200.html