STANFORD, Calif. — Hot Chips is an appropriate term for a conference on future trends in microprocessors. On a rare hot day here in Memorial Hall on the campus of Stanford University, even the air conditioners fail to counterbalance the heat from notebooks that adorn practically every lap in the auditorium.
Inside, the talk among engineers and computer scientists is all things multi-core. Intel and AMD have shifted their strategies from clock speed to core count, and every demonstration, from graphics cards to research projects, showed off multi-core efforts as well.
The problem is that while the hardware engineers have made a monumental effort to build multi-core machines, the applications have not followed. That’s because parallel programming is a complicated science, one that is driving even the impressive collection of PhDs at this show up the wall.
“A lot of it is compiler science that needs to be updated to make programming [multithreaded applications] easier, and it will happen,” Peter Glaskowsky, technology analyst for Envisioneering, told internetnews.com. “Multi-core is really good at a narrow class of applications. A lot of people are doing a lot of work so multi-core will benefit many kinds of applications.”
But just throwing cores at the problem won’t help without careful design, said Erik Lindholm, an Nvidia engineer and Silicon Graphics veteran, in his keynote speech. Lindholm discussed the scalar design of Nvidia’s most recent video chip, the G80, which is found in the GeForce 8800 line of cards.
“You can’t build infinitely wider hardware; your scalability goes down,” he said. There must be a balance between workload units. In the case of a video card, that means balancing the pixel processors, vertex engines and triangle animation. “You don’t want to emphasize one part of the shader and stall out another. That will cause bubbles in the pipeline.”
Nvidia discussed its Compute Unified Device Architecture, or CUDA, a technology for writing applications in the C language that harness the computational power of the G80. The company has also introduced a line of computers under the Tesla brand name.
The Tesla products are designed to aid heavy computation projects, especially floating-point calculations, in science and medicine. The G80 can keep up to 12,288 threads in flight across its 128 stream processors. CUDA is designed to address the threading problem by letting a programmer write multi-threaded applications with just a few lines of C code.
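To give a sense of what “a few lines of C” means, here is a minimal, hypothetical CUDA sketch in the vector-add style common to early GPU-computing tutorials; the names, sizes and launch parameters are illustrative, not taken from Nvidia’s materials.

    // vector_add.cu -- an illustrative sketch, not from Nvidia's samples.
    #include <stdio.h>
    #include <stdlib.h>
    #include <cuda_runtime.h>

    // Each GPU thread computes one element of the output array.
    __global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n)                                      // guard the final block
            c[i] = a[i] + b[i];
    }

    int main(void) {
        const int n = 12288;                 // one element per G80 thread slot
        size_t bytes = n * sizeof(float);
        float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; i++) { ha[i] = (float)i; hb[i] = 2.0f * i; }

        float *da, *db, *dc;                 // device (GPU) copies
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        vectorAdd<<<96, 128>>>(da, db, dc, n);  // 96 blocks x 128 threads = 12,288
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);  // implicitly synchronizes

        printf("c[42] = %f\n", hc[42]);      // expect 126.0
        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

The launch line is the key idea: one call fans the same C function out across thousands of hardware threads, and the runtime handles the scheduling.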
AMD followed with a demonstration of its HD 2900 video card, but stuck to promoting it as a graphics processor. “To us, whether you are playing video or doing 3D, it’s a form of decoding and decompression… so our view of the graphics chip is it’s a decoder and decompressor,” said Mike Mantor, a Fellow at AMD.
Intel showed off its 80-core prototype, designed as a network on a chip delivering teraflop performance while running at under 100 watts. The caveat is that the prototype is not compatible with x86 systems; for now, it remains a lab experiment.
The chip uses a tile design for the cores, arranged in an eight-by-ten grid. Each tile has a router connecting its core to an on-chip network that links all the cores together, rather than making them communicate over a shared frontside bus as Intel’s Core 2 and Xeon processors do. Intel estimates that the chip’s advanced sleep technology yields a two- to five-fold reduction in power leakage.
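To see why a routed mesh scales better than a shared bus, consider hop counts under dimension-order (XY) routing, a standard textbook scheme used here purely as an illustration; Intel did not detail its actual routing algorithm. The sketch below models a generic 8×10 mesh, not Intel’s silicon.

    // mesh_hops.c -- illustrative hop counts on a generic 8x10 tile mesh.
    // This models textbook XY routing, not Intel's actual router design.
    #include <stdio.h>
    #include <stdlib.h>

    #define COLS 8   // tiles per row
    #define ROWS 10  // rows of tiles

    // Dimension-order (XY) routing: travel along X first, then along Y.
    int hops(int src, int dst) {
        int sx = src % COLS, sy = src / COLS;
        int dx = dst % COLS, dy = dst / COLS;
        return abs(dx - sx) + abs(dy - sy);
    }

    int main(void) {
        // Worst case: opposite corners of the 80-tile grid.
        printf("tile 0 -> tile 79: %d hops\n", hops(0, 79));  // 7 + 9 = 16
        // Neighboring tiles need just one hop, and transfers between
        // different tile pairs can proceed in parallel on disjoint links,
        // unlike cores contending for a single shared bus.
        return 0;
    }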
The many-core speeches continued with Madhu Saravana Sibi Govindan of the University of Texas at Austin, who discussed UT’s own multi-core project, TRIPS (The Tera-op, Reliable, Intelligently adaptive Processing System).
TRIPS uses a design known as EDGE, for Explicit Data Graph Execution, which groups a stream of individual instructions into a block and executes the block as a unit. Processors today function by executing instructions one at a time, very fast. EDGE instead attempts to run as many instructions as possible within one block, firing each instruction as soon as its inputs are ready.
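As a rough illustration of the idea, consider the dataflow in the ordinary C fragment below (a made-up example; TRIPS compilers form such blocks automatically). A conventional core issues these statements a few at a time in program order, while a block-oriented machine can map the whole dependence graph onto the hardware at once.

    // edge_block.c -- a hypothetical sketch of block-level dataflow.
    #include <stdio.h>

    void block(float *out, const float *in) {
        float a = in[0] * 2.0f;   // a, b, c, d have no mutual
        float b = in[1] * 2.0f;   // dependences, so a wide machine
        float c = in[2] * 2.0f;   // could issue all four in the
        float d = in[3] * 2.0f;   // same cycle
        float e = a + b;          // fires once a and b are ready
        float f = c + d;          // fires once c and d are ready
        out[0] = e + f;           // fires last, once e and f arrive
    }

    int main(void) {
        float in[4] = {1.0f, 2.0f, 3.0f, 4.0f}, out[1];
        block(out, in);
        printf("%f\n", out[0]);   // 20.0
        return 0;
    }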
TRIPS can execute up to 16 instructions per cycle, whereas the Intel Core 2 processor can only do four. Because of its large blocks, a 366MHz prototype was able to flatten a Pentium 4 in some benchmarks, while being flattened in others. At this point, the processor and the code for it are still in development, and Govindan said maximum performance requires hand coding, a skill not many people have acquired.
This article was first published on InternetNews.com.