absolutefunk wrote:
Okay, so if I keep the max CPU resources for F@H below the guaranteed CPU set aside for my linode, that should be fine, right? What I don't understand (from the IRC chat posted) is how a process set to use at most a fraction of CPU time can still use 100%. Granted, I'm referring to setting the limit in the program itself (F@H supports this internally). I doubt I'll ever run F@H on my linode; I just don't understand why this would be a problem if you're capping what the program can use to begin with. If you're running within your guaranteed CPU limits, it shouldn't slow down the host. UML is new to me, in case you guys didn't figure that out already.

-Brian
Think about it this way:
You've got a slice of the pie DEDICATED to you. You can use that slice, and ANY process you run gets priority on it.
If that slice is idle, then other people can 'borrow' it for short stretches of CPU-intensive work. The fact that everyone will *not* be using 100% of their slice at all times is what keeps a shared UML host responsive.
If you run some distributed computing client, you'll end up using your slice 100% of the time, and possibly even 'borrowing' the slices of other people who aren't using theirs.
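And that's the answer to the "how can a capped process still use 100%" part: I don't know exactly how F@H implements its internal limit, but caps like that are usually plain duty cycling, i.e. the client computes flat-out for a fraction of every second and sleeps for the rest. Here's a minimal Python sketch of the idea (the function name and numbers are made up for illustration, not F@H's actual code):

```python
import time

def folding_loop(cpu_limit=0.5, period=0.1, run_for=3.0):
    """Duty-cycle throttle: burn CPU for cpu_limit*period seconds of every
    period, then sleep the remainder. A real client loops forever; run_for
    just bounds this demo."""
    start = time.monotonic()
    while time.monotonic() - start < run_for:
        burst_end = time.monotonic() + cpu_limit * period
        while time.monotonic() < burst_end:
            pass                              # 100% CPU *during* each burst
        time.sleep(period * (1 - cpu_limit))  # idle the rest of the period

wall_start = time.monotonic()
cpu_start = time.process_time()
folding_loop(cpu_limit=0.5)                   # the "50% cap"
wall = time.monotonic() - wall_start
cpu = time.process_time() - cpu_start
print("average CPU use: %.0f%%" % (100 * cpu / wall))   # prints roughly 50%
```

Averaged over a day that really is 50%, but during every burst the process demands 100% of a CPU, and it does that around the clock, so your slice never sits idle long enough for anyone to borrow it. Compare that to a normal web or mail server, which idles between requests.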
I think, if nothing else, it would be HIGHLY impolite to others on your host.
So, basically, if you're on host40 and you do this, I will track you down.
