Tuesday, June 28, 2011

"Dual core technology"

It's funny: until maybe five years ago, they were always promoting clock speed in processors. That was progress, that's how you promoted it. But then at some point they said, in the words of Steve Jobs, "the whole industry has hit a wall"*. Technical problems were preventing them from making the circuit pathways thinner, and thus from making the chips faster. Progress in clock speed has slowed dramatically since then. I suppose it killed Moore's law. But heck, that lasted nearly half a century, amazing for any tech prediction.

So then they started to add more cores to processors to make them faster: parallel processing. Software has to be written for it to be an advantage, though. And you never hear any talk of processor speeds these days; it's only the number of cores.
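To make that concrete, here's a minimal Python sketch (with arbitrary workload numbers) of the difference between software that ignores extra cores and software written to use them. The same CPU-bound job only speeds up when the program explicitly divides the work across processes:

```python
import time
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division -- deliberately CPU-bound."""
    lo, hi = bounds
    return sum(
        1 for n in range(max(lo, 2), hi)
        if all(n % d for d in range(2, int(n ** 0.5) + 1))
    )

if __name__ == "__main__":
    chunks = [(i * 25_000, (i + 1) * 25_000) for i in range(4)]

    start = time.perf_counter()
    total = sum(count_primes(c) for c in chunks)  # one core does all the work
    print(f"serial:   {total} primes, {time.perf_counter() - start:.2f} s")

    start = time.perf_counter()
    with Pool(4) as pool:                         # the work is split across cores
        total = sum(pool.map(count_primes, chunks))
    print(f"parallel: {total} primes, {time.perf_counter() - start:.2f} s")
```

On a quad-core machine the second run should finish several times faster; on a single-core machine the two take about the same time, which is exactly the point.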

----
*A year prior he had released a new generation of Power Macs, promising that in another year they would come up to so-and-so clock speed. I thought: "Don't say that, you idiot. For one thing, it'll make some people refrain from buying the current version; for another, you can't promise anything like that." Lo and behold, a year later he stood there with his hat in his hand, humbly. Well, as humble as Steve ever gets, nothing dramatic... The weird thing is, Steve normally never talks about future products. The one time he elects to do so... bad timing!



13 comments:

Datamancer said...

That confused the hell out of me the last time I looked at computers. I've got an old brick of a machine with a 2.7 GHz processor in it, and more recent machines have something like 1.5-2.0 GHz with multiple cores. It just goes to show that higher numbers don't mean "better".
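A crude back-of-the-envelope model shows why: useful throughput is roughly cores × clock × instructions-per-cycle (IPC), and newer cores do more per tick. The IPC figures below are invented for illustration, not measurements of any real chip:

```python
# Crude throughput model: cores x clock (GHz) x instructions per cycle (IPC).
# Both IPC values are made-up ballpark figures, purely for illustration.
old_brick = 1 * 2.7 * 1.0   # one core at 2.7 GHz, modest IPC
newer_box = 2 * 1.8 * 2.0   # two cores at 1.8 GHz, a more efficient core

print(f"old brick: ~{old_brick:.1f} billion instructions/second")
print(f"newer box: ~{newer_box:.1f} billion instructions/second (if both cores are used)")
```

By that rough measure the "slower" machine does more than twice the work, provided the software actually uses both cores.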

Eolake Stobblehouse said...

Yes, I never did find out what happened there when they all took a big jump downwards in advertised clock speed.

Bert said...

Blame marketing for the confusion.

For example, did you know that the Core 2 label on Intel processors had nothing to do with dual cores? Seriously! It refers to version 2 of the Core processor architecture. Really. And to be sure that nobody could possibly follow their "logic", they adopted that name just as they were getting ready to release their first mainstream dual core processors, hence the Core 2 Duo, etc. 'nuff said, otherwise I won't be able to remain polite.

Looking at Intel toys, the point where the use of clock frequency as a performance indicator broke down was precisely when the NetBurst architecture was abandoned in favor of the Core architecture.

If we picture the former as a lawnmower mounted under a racing car, then the latter is a harvester. There is no point in comparing the top speeds of the two; the racing car will always be faster. But it is the harvester that will get the most work done, especially as it gets wider and wider!

As for Moore's "law", one should refrain from applying it to any one parameter. If you consider the actual throughput of current processors, it's not dead yet; there's still lots of progress to be made.

Progress follows the path of least resistance, like pretty much everything in nature. All one can infer from recent developments is that it has lately become easier to improve the internal architecture than to keep pushing clock speeds higher. And once they run out of ideas in that department, there will be some other area to improve.

One can rest assured that we haven't seen the end of the scale in terms of clock speeds either. Be it through new silicon geometries or some other development, the race will resume when the conditions are right.

Eolake Stobblehouse said...

Good points.

"Looking at Intel toys, the point where the use of clock frequency as a performance indicator broke down was precisely when the Netburst architecture was abandoned in favor of the Core architecture."

... but it seemed the same happened at the same time to PowerPC processors... ?

Bert said...

"... but it seemed the same happened at the same time to PowerPC processors... ?"

Increasing clock speeds became difficult for everyone in the field, you know... :-)

Eolake Stobblehouse said...

Yes, that's what I mean. For a moment there, you made it sound like it was connected to, and caused by, Intel's change of architecture.

Timo Lehtinen said...

"... but it seemed the same happened at the same time to PowerPC processors... ?"

The difficulties in going to 45 nm, which you are referring to, were really only a PowerPC hiccup. Intel was already fabbing at 45 nm when Jobs announced that "the whole industry" had run into a wall. And now the industry is already moving to 28 nanometers.

You need to understand that Jobs is a marketer, not an engineer or an architect. His statements about technology are rarely accurate.

Eolake Stobblehouse said...

Huh, I'll be durned.

Timo Lehtinen said...

Another way to put it is that there's always a wall to push through when adopting new processes. But PowerPC development was clearly lagging behind at that point, whereas Intel moved to 45 nm with little difficulty.

And it was immediately after this episode that Apple switched to Intel. For very good reasons.

Bert said...

"For very good reasons."

Steve's ego was bruised? :-P

Timo Lehtinen said...

"Steve's ego was bruised? :-P"

That, but also look at what they were able to do with the Mac Mini and MacBook Air in terms of performance per watt after the switch to Intel.

No can do with PowerPC.

Eolake Stobblehouse said...

I don't doubt it. In the nineties, PowerPC had such huge promise. But when I went from a Power Mac to a Mac Pro (Intel), the drop in noise was *huge*. I was and am still thankful for that.

Anonymous said...

In previous decades, each new processor generation improved on the performance of the previous one by way of more efficient and/or more elaborate designs, as well as better manufacturing processes. Every time the fabs moved to a new manufacturing process, more circuitry could be etched onto the same size chip (or the same amount onto a smaller chip). Taking an existing design and making a smaller version of it (referred to as a die shrink) would almost always result in a faster and more energy-efficient chip.
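The arithmetic behind that is simple, if idealized: transistor density scales roughly with the inverse square of the feature size, so halving the feature size quadruples what fits on the same die. A quick sketch (ideal scaling; real layouts never shrink quite this cleanly):

```python
# Idealized die-shrink arithmetic: density goes up with the inverse square
# of the feature size. Real layouts never scale quite this cleanly.
old_node, new_node = 130, 65   # feature sizes in nanometers

density_gain = (old_node / new_node) ** 2
print(f"{old_node} nm -> {new_node} nm: ~{density_gain:.0f}x transistors per mm^2")
```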

Since reaching the 130 nm process (the Pentium 4 and Athlon 64 era) roughly ten years ago, die shrinks have provided diminishing returns. They can still pack more transistors into one chip, but the efficiency and clock speed gains have been minimal compared to what they were in the past.

Making single cores faster became that much more difficult, while putting multiple cores on one die became cheaper. So that's what happened.

As far as I'm concerned, the marketing is FUBAR these days. The only way to get any clue about what one is buying is to google the model number of the CPU, because there are too many separate numbering schemes.
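For what it's worth, you can at least ask the machine itself what's inside. A small Python sketch; the output varies by operating system, and platform.processor() can be empty or vague on some systems:

```python
import os
import platform

# A quick peek at what was actually bought. Results vary by OS, and
# platform.processor() may be empty or vague on some machines.
print("reported model:", platform.processor() or "unknown")
print("logical cores: ", os.cpu_count())
```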