Apple (like MS) is working hard on multi-core processing.
What a surprise it was when Moore's law suddenly stopped being so reliable a couple of years back. I remember when the G5 was introduced and Steve Jobs publicly promised that within a year it would be up to 3GHz. I thought, "don't promise that, you so-and-so, you can't possibly know that." And lo and behold, a year later he had to eat crow because "the processor industry has hit a brick wall." Since then everybody has been going with more and more processors instead of the faster ones they can't get. But the problem is that writing software that takes advantage of more than a couple of processors is apparently really hard.
6 comments:
It is really hard indeed. Parallelism has some unique problems that are pretty difficult to overcome. Most of them come down to "resource sharing": the same resource can (and should) only be accessed by one thread at any given time, so you need all sorts of mechanisms (locks, mutexes, semaphores, whatever you want to call them) to enforce that. It's another layer of complexity on an already complex system.
Some tasks simply can't be parallelised and have to be executed sequentially.
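For the curious, here's what that locking layer looks like in its simplest form: a minimal C/pthreads sketch (not from any particular project; the names are made up for illustration). Two threads bump a shared counter, and the mutex keeps their increments from stepping on each other.

```c
/* Minimal sketch of the "resource sharing" problem: two threads
 * increment one shared counter. Without the mutex, increments can
 * interleave and get lost; the lock serializes access. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);   /* only one thread in here at a time */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* always 2000000 with the lock */
    return 0;
}
```

Comment out the two lock calls and the final count comes up short on most runs, which is exactly the extra layer of complexity being described.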
Having developed software for multiprocessor embedded systems for the last 30 years, I can say that the development problems are not insurmountable. They are just different. There are many design techniques that can be used to create such systems, but they are unfamiliar to untrained developers. Looks like it's time for me to move into training - LOL!
There are languages that were written specifically for fork-and-join operation, such as LISP and, more recently, Concurrent C and another whose name escapes me. Verilog is a parallel language by definition, but it's meant for hardware.
We were working on a chip that took a coarse approach to parallelism: we had 24 DSPs, 8 RISC engines, and a handful of memory-transfer engines. They worked on a block of data per engine, so each would be given, say, a rectangle of an image, process that sub-image, and then the finished pieces would be rejoined. Part of the problem here was reassigning compute resources to tasks, like at immigration: the citizens-only counter, the high-priority task, reverts to serving non-citizens once the citizens are processed. Load sharing. A rough sketch of the pattern is below.
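Here's that coarse approach in spirit, as a C/pthreads sketch (not Cradle's actual toolchain; dimensions and names are invented). Split the image into horizontal bands, hand one band to each worker, and rejoin when everyone is done. Because the bands are disjoint, the workers need no locks at all, which is a big part of why coarse partitioning is attractive.

```c
/* Coarse data parallelism: each worker processes its own band of an
 * image, then the main thread "rejoins" by waiting for all of them. */
#include <pthread.h>

#define WIDTH   640
#define HEIGHT  480
#define WORKERS 4

static unsigned char image[HEIGHT][WIDTH];

struct band { int first_row, last_row; };

static void *process_band(void *arg)
{
    struct band *b = arg;
    for (int y = b->first_row; y < b->last_row; y++)
        for (int x = 0; x < WIDTH; x++)
            image[y][x] = 255 - image[y][x];   /* stand-in for real DSP work */
    return NULL;
}

int main(void)
{
    pthread_t tid[WORKERS];
    struct band bands[WORKERS];
    int rows = HEIGHT / WORKERS;

    for (int i = 0; i < WORKERS; i++) {
        bands[i].first_row = i * rows;
        bands[i].last_row  = (i == WORKERS - 1) ? HEIGHT : (i + 1) * rows;
        pthread_create(&tid[i], NULL, process_band, &bands[i]);
    }
    for (int i = 0; i < WORKERS; i++)   /* the "rejoin" step */
        pthread_join(tid[i], NULL);
    return 0;
}
```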
Pipelining of engines is another form of parallelism, and a very common one. The CPU reads one instruction while fetching data for the previous one, computing the one before that, and saving the results of the one before that. Fordism applied in microcosm.
What if the instruction in execution needs data that was changed by the instruction that is still saving? The sharing of data between three adjacent instructions is bad enough. Now imagine two instructions at each pipe stage! That is basically what a Duo is doing: alternating instructions completed by two compute engines on one die.
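To make that overlap concrete, here's a toy two-stage software pipeline in C/pthreads (purely illustrative; the stages and buffer are made up). One thread "fetches" items into a one-slot buffer while the other "executes" the previous item; the condition variables play the role of the hardware interlocks that stall a stage when the data it needs isn't ready yet.

```c
/* Two-stage pipeline: stage 1 produces the next item while stage 2
 * consumes the previous one. The wait loops are the "stalls". */
#include <pthread.h>
#include <stdio.h>

static int slot, slot_full = 0, done = 0;
static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;

static void *stage1(void *arg)            /* "fetch" */
{
    for (int i = 0; i < 10; i++) {
        pthread_mutex_lock(&m);
        while (slot_full)                 /* stall: previous item not consumed */
            pthread_cond_wait(&cv, &m);
        slot = i; slot_full = 1;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&m);
    }
    pthread_mutex_lock(&m);
    done = 1;
    pthread_cond_signal(&cv);
    pthread_mutex_unlock(&m);
    return NULL;
}

static void *stage2(void *arg)            /* "execute" */
{
    for (;;) {
        pthread_mutex_lock(&m);
        while (!slot_full && !done)       /* stall: nothing to work on yet */
            pthread_cond_wait(&cv, &m);
        if (!slot_full && done) { pthread_mutex_unlock(&m); break; }
        int item = slot; slot_full = 0;
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&m);
        printf("processed %d\n", item);   /* overlaps with the next fetch */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, stage1, NULL);
    pthread_create(&t2, NULL, stage2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```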
There are also support engines for the CPU: memory management prefetching disk to RAM and RAM to cache, floating-point co-processors, video drivers. They all have to share memory buffers at their interfaces. Locking mechanisms started the day someone wanted two terminals to share one file; they are common in databases and multitasking OSes, but they were previously abstracted away from application engineers.
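That two-terminals-one-file scenario is still the canonical example. Here's a minimal sketch using POSIX advisory locking (flock; the helper function is made up for illustration): each writer takes an exclusive lock before appending, so records from concurrent writers don't interleave mid-record.

```c
/* Two processes appending to one file: flock serializes the writers. */
#include <sys/file.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>

int append_record(const char *path, const char *record)
{
    int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0) return -1;

    flock(fd, LOCK_EX);                 /* block until we own the file */
    write(fd, record, strlen(record));  /* safe: no other writer right now */
    flock(fd, LOCK_UN);                 /* hand it back to the other terminal */

    close(fd);
    return 0;
}
```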
You got me thinking now. One of our architects from Cradle is now an architect at Apple, working on their SoC solutions.
We were just ahead of the game, and no-one believed us back then.
Want a cheap & easy prediction? Truly useful multi-core apps won't come from the big dogs.
They have yet to properly harness what they have in hand (still can't figure out how a web browser can bring a 3GHz P4 to its knees, but they both managed it).
What will they do with, say, 10 cores? Make Word even slower by adding another dozen abstraction layers so that it can spread across cores? Bah.
Likely in the foreseeable future: an open-source 'nix that lets you boot up virtually-physical (what do you call a virtual machine that has its own hardware core?) 'nix, Win, or Mac machines, all co-existing on the same platform.
And ever more immersive games, of course.
Just an FYI...
Attack the parallel worlds of parallel programming