vindimy wrote:
You may be right, for system processes, but what about the rest?
From
http://www.codinghorror.com/blog/archives/000942.html:
Quote:
...only rendering and encoding tasks exploit parallelism enough to overcome the 25% speed deficit between the dual and quad core CPUs.
I haven't read the article, but the choice of the word "parallelism" suggests where the confusion may lie.
Let's say you want to add up the numbers 1 to 100.
The simple method would be 1+2+3+4+...+99+100. Now if you run this it will all run on a single CPU. The other 3 CPUs will be totally idle and doing nothing.
Instead, let's do it as 4 sums:
1+..+25
26+...+50
51+...+75
76+...+100
And then add up the 4 results. This will now use all 4 CPUs and so run (almost) 4 times quicker because the 4 smaller sums can be run in parallel; it's making use of parallelism.
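To make that concrete, here's a minimal sketch in Python using the standard multiprocessing module (the 4-way split and the worker function are just illustrations of the idea, not anything from the article):

```python
import multiprocessing

def partial_sum(bounds):
    # Sum one chunk of the range, inclusive on both ends.
    lo, hi = bounds
    return sum(range(lo, hi + 1))

def parallel_sum(n, workers=4):
    # Split 1..n into `workers` roughly equal chunks, e.g. for n=100:
    # (1,25), (26,50), (51,75), (76,100).
    chunk = n // workers
    ranges = [(i * chunk + 1, n if i == workers - 1 else (i + 1) * chunk)
              for i in range(workers)]
    # Each chunk can run on a different core; then add up the 4 results.
    with multiprocessing.Pool(workers) as pool:
        return sum(pool.map(partial_sum, ranges))

if __name__ == "__main__":
    print(parallel_sum(100))  # prints 5050
```

For a sum this tiny the process start-up cost swamps any gain, of course; the speed-up only shows once each chunk is doing real work.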
The problem is that very very few programs are designed to make use of parallelism properly. Rendering is (nowadays) one of them; lots of work has been done on those algorithms. I believe Excel 2007 can do some small amount of parallelism. But most programs? Not really. For a lot of programs 1 really fast CPU is better than 4 slower CPUs simply because those programs can only use 1 CPU at a time.
This is actually a big issue in computing at the moment; CPUs are reaching a plateau on per-core performance. Instead of ever-higher clock speeds, what we're seeing instead is more cores. So dual-core, quad-core, eight-core CPUs... computer scientists and programmers need to come up with a better way of parallelising their code because the free ride of per-core speed increases is almost over. You'll hear lots more about "multi-threaded" applications in the future.
Interestingly, Unix itself is pretty good at multi-core work because it's designed to run with lots of independent processes. For example, apache may fork off 100 separate httpd processes; each of those can run on a different CPU because there's little interaction between them. But that's fine for small tasks (like serving web pages, email handling, etc); it still doesn't solve the large computationally intensive tasks.
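The apache-style pattern is just fork-and-reap; a rough Unix-only sketch (the worker count and the "work" are placeholders):

```python
import os

def spawn_workers(n):
    # Fork n fully independent child processes, apache-style.  The kernel
    # is free to schedule each child on any core since they don't interact.
    pids = []
    for _ in range(n):
        pid = os.fork()
        if pid == 0:
            # Child: do some independent work (serve a request, etc.),
            # then exit immediately without running parent cleanup.
            os._exit(0)
        pids.append(pid)          # parent records the child's pid
    for pid in pids:
        os.waitpid(pid, 0)        # parent reaps each child
    return pids

if __name__ == "__main__":
    spawn_workers(4)
```

This only works on Unix-likes (os.fork doesn't exist on Windows), which fits the point: the model is cheap for lots of small independent jobs, but it doesn't parallelise one big computation.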
Another take on this is the IBM "Cell" processor and modern graphics card GPU chips; these have lots of independent processing units. GPU and cell programming is "hot stuff".
(OK, I just took a quick glance at that article... yeah, seems to be talking about the same thing that I just wrote about).
As you can hopefully see, parallelism isn't really related to processor affinity. Processor affinity, as I described earlier, is about cache coherency: it keeps a thread of execution (a process, in this case) accessing its data efficiently by stopping it from migrating between cores. It does nothing to solve the parallelism problem of how to _use_ the other cores.
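On Linux you can actually see and set affinity from Python, which makes the distinction obvious: a pinned process still only ever uses one core. A small Linux-only sketch (the choice of core 0 is arbitrary):

```python
import os

# Pin the current process to core 0 so its working set stays warm in
# that core's cache.  pid 0 means "the calling process".
os.sched_setaffinity(0, {0})

# The kernel will now only ever schedule us on core 0 -- efficient for
# *this* process, but the other cores stay idle unless something else
# uses them.
print(os.sched_getaffinity(0))  # prints {0}
```

(sched_setaffinity is Linux-specific; other OSes expose similar knobs through different APIs.)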
As for your heat measurements: it's quite possible the OS prefers the first core when scheduling. *shrug* Dunno; I haven't looked into the kernel scheduler that closely! Different OSes may have different performance characteristics.