So then they started adding more cores to processors to make them faster: parallel processing. Software has to be written to take advantage of it, though. But you never hear any talk of processor speeds these days; it's all about the number of cores.
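A toy sketch of what "written for it" means, in Python (the workload and numbers here are made up purely for illustration): the same CPU-bound job only benefits from extra cores if the code explicitly farms the work out to them.

    import time
    from multiprocessing import Pool, cpu_count

    def busy_work(n):
        # CPU-bound filler: just keeps one core occupied for a while
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        jobs = [2_000_000] * 8

        start = time.perf_counter()
        serial = [busy_work(n) for n in jobs]   # one core, no matter how many you have
        t_serial = time.perf_counter() - start

        start = time.perf_counter()
        with Pool(cpu_count()) as pool:         # spreads the jobs across all cores
            parallel = pool.map(busy_work, jobs)
        t_parallel = time.perf_counter() - start

        assert serial == parallel
        print(f"serial:   {t_serial:.2f} s")
        print(f"parallel: {t_parallel:.2f} s on {cpu_count()} cores")

The serial version runs the exact same code and gets no benefit at all from a multi-core chip; only the version written for parallelism does.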
----
*A year prior he had been unveiling a new generation of Power Macs, promising that within another year they would reach such-and-such a clock speed. I thought: "Don't say that, you idiot. For one thing, it'll make some people hold off on buying the current version; for another, you can't promise anything like that." Lo and behold, a year later he stood there, hat in hand, humbly. Well, as humble as Steve ever gets, nothing dramatic... The weird thing is, Steve normally never talks about future products. The one time he elects to do so... bad timing!
Update:
Bert's comment below says it best: blame marketing for the confusion.
-
That confused the hell out of me the last time I shopped for computers. I've got an old brick of a machine with a 2.7 GHz processor in it, while more recent machines have something like 1.5-2.0 GHz with multiple cores. It just goes to show that higher numbers don't mean "better".
-
Yes, I never did find out what happened there when they all took a big jump downwards in advertised clock speed.
-
Blame marketing for the confusion.
For example, did you know that the Core 2 label on Intel processors had nothing to do with dual cores? Seriously! It refers to version 2 of the Core processor architecture. Really. And to make sure that nobody could possibly follow their "logic", they adopted that name just as they were getting ready to release their first mainstream dual-core processors, hence the Core 2 Duo, etc. 'nuff said, otherwise I won't be able to remain polite.
Looking at Intel toys, the point where the use of clock frequency as a performance indicator broke down was precisely when the Netburst architecture was abandoned in favor of the Core architecture.
If we make the former a lawnmower mounted under a racing car, then the latter is a harvester. There is no point in comparing the top speeds of the two; the racing car will always be faster. But it is the harvester that gets the most work done, especially as it gets wider and wider!
As for Moore's "law", one should refrain from applying it to any single parameter. If you consider the actual throughput of current processors, it's not dead yet; there is still lots of progress to be made.
Progress follows the path of least resistance, like pretty much everything in nature. All one can infer from recent developments is that it has lately become easier to improve the internal architecture than to keep chasing faster clock speeds. And once they run out of ideas in that department, there will be some other area to improve.
One can rest assured that we haven't seen the end of the scale in terms of clock speeds either. Be it through new silicon geometries or some other development, the race will resume when the conditions are right.
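To bolt some rough numbers onto the lawnmower/harvester analogy (a Python sketch; the IPC figures are invented for illustration, and only the clock speeds are in the right ballpark for late Netburst and early Core 2 parts), a first-order model of throughput is instructions per clock, times clock speed, times cores:

    # First-order model: throughput ~ IPC x clock (GHz) x cores.
    # The IPC values below are illustrative, not measured.

    def throughput(ipc, ghz, cores):
        # Loosely, "billions of instructions per second"
        return ipc * ghz * cores

    racing_car = throughput(ipc=1.0, ghz=3.8, cores=1)  # Netburst-style: high clock, little work per tick
    harvester = throughput(ipc=2.5, ghz=2.4, cores=2)   # Core-style: lower clock, wider, dual core

    print(racing_car)  # 3.8
    print(harvester)   # 12.0

The chip with the "slower" number on the box gets roughly three times the work done, which is exactly why the GHz figure stopped meaning anything on its own.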
-
Good points.
-
"Looking at Intel toys, the point where the use of clock frequency as a performance indicator broke down was precisely when the Netburst architecture was abandoned in favor of the Core architecture."
... but it seemed the same happened at the same time to PowerPC processors... ?
-
"... but it seemed the same happened at the same time to PowerPC processors... ?"
Increasing clock speeds became difficult for everyone in the field, you know... :-)
-
Yes, that's what I mean. For a moment there you made it sound as if it was connected to, and caused by, Intel's change of architecture.
-
"... but it seemed the same happened at the same time to PowerPC processors... ?"
The difficulties in going to 45 nm, which you are referring to, were really only a PowerPC hiccup. Intel was already fabbing at 45 nm when Jobs announced that "the whole industry" had run into a wall. And now the industry is already moving to 28 nanometers.
You need to understand that Jobs is a marketer, not an engineer or an architect. His statements about technology are rarely accurate.
-
Huh, I'll be durned.
-
Another way to put it is that there is always a wall to push through when adopting a new process. But PowerPC development was clearly lagging behind at that point, whereas Intel moved to 45 nm with little difficulty.
-
And it was immediately after this episode that Apple switched to Intel. For very good reasons.
-
"For very good reasons."
Steve's ego was bruised? :-P
-
"Steve's ego was bruised? :-P"
That, but also look at what they were able to do with the Mac Mini and the MacBook Air in terms of performance per watt after the switch to Intel.
No can do with PowerPC.
-
I don't doubt it. In the nineties, PowerPC had such huge promise. But when I went from a Power Mac to a Mac Pro (Intel), the drop in noise was *huge*; I was, and still am, thankful for that.
-
In previous decades, each new processor generation improved on the performance of the previous one through more efficient and/or more elaborate designs, as well as better manufacturing processes. Every time the fabs moved to a new manufacturing process, it allowed more circuitry to be etched onto the same size chip (or the same amount onto a smaller chip). Taking an existing design and making a smaller version of it (referred to as a die shrink) would almost always result in a faster and more energy-efficient chip.
-
After reaching the 130 nm process (the Pentium 4 / Athlon 64 era) roughly ten years ago, die shrinks have been providing diminishing returns. They can still pack more transistors into one chip, but the efficiency and clock speed gains have been minimal compared to what they were in the past.
Making single cores faster became that much more difficult, while putting multiple cores on one die became cheaper. So that's what happened.
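A quick back-of-the-envelope sketch in Python (first-order only; real scaling factors are messier than this) of what each shrink buys in transistor budget, using the nodes mentioned in this thread plus the well-known intermediate steps:

    # To first order, the number of transistors that fit in a fixed die area
    # scales with the inverse square of the feature size.
    # Real-world scaling is messier, so treat these as ballpark figures.

    def density_gain(old_nm, new_nm):
        return (old_nm / new_nm) ** 2

    for old, new in [(130, 90), (90, 65), (65, 45), (45, 28)]:
        print(f"{old} nm -> {new} nm: ~{density_gain(old, new):.1f}x the transistors")

Each step still roughly doubles the transistor budget; the part that stopped scaling is what each transistor buys you in clock speed and efficiency.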
-
As far as I'm concerned, the marketing is FUBAR these days. The only way to get any clue about what one is buying is to google the model number of the CPU, because there are too many separate numbering schemes.