ericholtman wrote:
So, if I understand correctly..... the real physical hardware I'm running on is 4 CPUs, each dual-core.
I believe most (if not all) hosts have two quad-core Xeon processors. But yes, a total of 8 cores is available to the host, of which each guest is permitted access to up to 4.
Quote:
So, the graph on the linode dashboard should scale to 400%.
It depends on what "should" means - that isn't what the dashboard currently does, if that's what you mean. I believe the dashboard is scaled to a single host CPU core, so 100% on the dashboard is one full core. The benefit is that the scale is independent of any change in the guest's core count; the downside is that it differs from guest-side utilities, because Linux counts CPU up to 100% times the number of cores.
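For concreteness, here's a quick sketch of the two scales side by side, using the core counts mentioned above (the function name is just illustrative):

```python
# Two ways of counting the same CPU usage, per the discussion:
#   dashboard scale:     100% == one fully busy host core
#   guest (Linux) scale: up to 100% * (number of guest cores)
GUEST_CORES = 4  # cores each guest may use

def cores_busy_to_dashboard_pct(cores_busy):
    """Number of fully busy cores -> dashboard percentage."""
    return cores_busy * 100.0

print(cores_busy_to_dashboard_pct(1))            # one full core -> 100.0
print(cores_busy_to_dashboard_pct(GUEST_CORES))  # all guest cores -> 400.0
```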
Quote:
And 'top', if it showed 100% utilization would be 100% of all cores, or the equivalent of 400% on the other graphs. And when I hit '1' in top, I only see 4 CPUs, not 8, because my virtualized instance only has 4.
Yes. Of course, when top is expanded to show all CPUs, 100% is per-CPU on each line, as opposed to single-CPU mode, where 100% on the summary line means all CPUs.
Quote:
So, assuming I'm consuming 1/10th of a physical machine, that would mean a steady state 40% number in the dashboard or in munin would be 'fair' (although I am allowed to have periods of higher use).
Well, the dashboard is relative to a single CPU core, so 40% on the dashboard is 40% of one core, or 5% of the physical machine (all 8 cores). One tenth of the physical machine would be 80% of a single core, i.e. 80% on the dashboard.
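That arithmetic can be checked directly (a quick sketch using the 8-core host figure from earlier in the thread; the function names are made up for illustration):

```python
HOST_CORES = 8  # two quad-core Xeons on the host

def dashboard_to_machine_pct(dashboard_pct):
    """Dashboard % (of one core) -> % of the whole physical machine."""
    return dashboard_pct / HOST_CORES

def machine_fraction_to_dashboard_pct(fraction):
    """Fraction of the whole machine -> dashboard % (of one core)."""
    return fraction * HOST_CORES * 100.0

print(dashboard_to_machine_pct(40.0))          # 40% of one core -> 5.0% of the machine
print(machine_fraction_to_dashboard_pct(0.1))  # 1/10 of the machine -> ~80% of one core
```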
CPU is pretty much always "fair": under contention it is shared equally among the guests. If multiple Linodes on the same host are really pushing CPU, at worst they'll share it equally.
It's true you could derive an expected capacity by dividing total CPU among the Linodes sharing the host (minus some overhead for the host itself), yielding a minimum expected allowance: the value each guest would get if all guests ran at maximum CPU simultaneously. There's a plausible argument that keeping average utilization close to that limit is being "neighborly" to the other guests sharing your host, which is also why running, say, a distributed computation just to absorb free CPU may not be the nicest thing to do.
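A rough sketch of that fair-share floor; the guest count and host overhead here are made-up numbers for illustration, not actual Linode figures:

```python
HOST_CORES = 8
HOST_OVERHEAD_CORES = 0.5   # hypothetical allowance for the host itself
LINODES_PER_HOST = 20       # hypothetical number of guests on the host

# The floor: what each guest gets if every guest pushes CPU at once.
floor_cores = (HOST_CORES - HOST_OVERHEAD_CORES) / LINODES_PER_HOST
print(f"fair-share floor: {floor_cores * 100:.1f}% of one core (dashboard scale)")
```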
But in reality, that minimum percentage (especially for the entry configurations) is quite a bit lower than the CPU a single guest actually gets on average, thanks to statistical sharing. So by and large, I'd use the CPU you need and let it be shared equally with everyone else on the host.
-- David