Linode Community Forums
PostPosted: Mon Aug 16, 2004 9:21 pm 
Senior Member

Joined: Sun Mar 14, 2004 9:18 pm
Posts: 116
Website: http://michael.susens-schurter.com/
WLM: mschurter@yahoo.com
Yahoo Messenger: mschurter
Location: Peoria, IL
so originally i had a linode 96. whenever i installed packages the system would come to a standstill. in fact some web pages wouldn't even load until a package had finished installing.

so we upgraded to a 128 (for various reasons), but the same thing still happens. i try to do small updates at a time, but the problem is that whenever i run 1 cpu (and disk intensive?) process the whole system seems to wait for it to finish.

i don't upgrade during peak hours, but i'm still worried that if someone hits one of my web pages during an update, they'll give up before it loads.

i've tried renicing dselect/apt-get, and that seems to help, but they just seem to "share" poorly in general.

has anyone else experienced this or am i crazy? is dselect/apt known for bringing systems to a crawl? (i'm usually a slackware user)

or here's my crazy theory: is it a UML process scheduling problem? it just seems like the process scheduler lets intensive processes hog the system while light-weights just have to wait for the big boys to finish.

has anyone (caker? ;) ) experimented with UML's process scheduling? i know that process scheduling in general is a hot topic in the linux kernel.

thanks in advance for any insight!


 Post subject: I/O is the problem
PostPosted: Wed Aug 18, 2004 5:10 am 
Senior Member

Joined: Thu Apr 15, 2004 3:18 am
Posts: 52
Website: http://www.rumble.net/
Location: London, UK
I think you'll find disk I/O is the problem: both running out of RAM and the massive amount of disk activity that working with packages creates.


PostPosted: Wed Aug 18, 2004 6:39 am 
Senior Member

Joined: Mon Jun 23, 2003 1:25 pm
Posts: 260
If it is a disk IO problem, you can take a look at the IO limiter stats with:

Code:
cat /proc/io_status


Adam


PostPosted: Wed Aug 18, 2004 6:55 am 
Junior Member

Joined: Thu May 13, 2004 8:08 am
Posts: 27
Hi there, I have a Linode 64 and use Debian stable as well - but I haven't experienced problems such as yours. Installing packages has always been a breeze.

Sorry I couldn't really help there, but I thought you'd be interested to know.


cheers, HS



PostPosted: Wed Aug 18, 2004 9:12 am 
Senior Member

Joined: Sun Mar 14, 2004 9:18 pm
Posts: 116
Website: http://michael.susens-schurter.com/
WLM: mschurter@yahoo.com
Yahoo Messenger: mschurter
Location: Peoria, IL
alright, i may have spoken too quickly before. it seems that it depends largely on how many packages i'm installing & how big they are. upgrading 3-6 simple packages (i.e. simple libraries) doesn't seem to affect my system much. upgrading all of my courier packages, big hit.

i've never looked at /proc/io_status before. in an idle state mine looks like:
Code:
io_count=14628131 io_rate=0 io_tokens=400000 token_refill=512 token_max=400000

care to give a short explanation of what these numbers mean? also, if IO is the bottleneck, what can i tweak? is there a place in /proc to adjust file caching/buffering?

i've worked with linux for a long time, but never tweaked a kernel before. considering i may be doubling the number of users on my linode soon, i feel like i should get educated!


PostPosted: Wed Aug 18, 2004 9:19 am 
Senior Member

Joined: Mon Jun 23, 2003 1:25 pm
Posts: 260
Code:
io_count=14628131 io_rate=0 io_tokens=400000 token_refill=512 token_max=400000


io_count is the number of IO operations since boot.
io_rate is the current IO usage.
io_tokens is the number of IO tokens you have available.
token_refill is how many tokens get added to the io_tokens count every second.
token_max is the maximum number of IO tokens you can have.

It works the same way iptables token buckets work.

The only time you should really get IO problems is when you are hitting swap a lot. You can use vmstat to look at that.

To monitor IO usage while you are running apt-get, use:

Code:
watch cat /proc/io_status

Adam
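The token-bucket behaviour Adam describes can be sketched as a tiny shell simulation. This is a toy only - the real limiter lives in the host kernel - and the small token values here are illustrative, not the real token_max=400000 / token_refill=512 numbers from /proc/io_status:

```shell
#!/bin/sh
# Toy token-bucket simulation of the UML IO limiter described above.
# Variable names mirror /proc/io_status; the tiny values are illustrative.
tokens=10
token_max=10
token_refill=2
io_count=0

spend() {   # charge $1 tokens, one per IO operation
  io_count=$((io_count + $1))
  tokens=$((tokens - $1))
}

tick() {    # called once per second: refill, capped at token_max
  tokens=$((tokens + token_refill))
  if [ "$tokens" -gt "$token_max" ]; then tokens=$token_max; fi
}

spend 10    # a burst of IO drains the bucket to 0
spend 1     # tokens go negative: further IO would now be throttled
tick        # one second later the refill brings the count back above zero
echo "tokens=$tokens io_count=$io_count"
```

Once the token count goes negative, every IO stalls until the per-second refill catches back up - which is the crawl being described in this thread.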


PostPosted: Wed Aug 18, 2004 9:48 am 
Senior Member

Joined: Sun Mar 14, 2004 9:18 pm
Posts: 116
Website: http://michael.susens-schurter.com/
WLM: mschurter@yahoo.com
Yahoo Messenger: mschurter
Location: Peoria, IL
thanks for the help!

i ran watch -n 1 cat /proc/io_status while using dselect

i upgraded the following 6 packages:
adduser apt apt-utils libfreetype6 manpages slang1a-utf8

during the installation io_rate peaked around 8,000-10,000 for each package with a drop between packages.

and then when it erased the previously downloaded .deb files at the end, io_rate hit an all-time high of 11,000

when it was all said & done i still had over 280,000 io_tokens, so i think i'm imagining things.

but i do understand much better how IO works. thanks a lot Adam!


PostPosted: Wed Aug 18, 2004 9:51 am 
Senior Member

Joined: Mon Jun 23, 2003 1:25 pm
Posts: 260
I should just add that the IO limiter is unique to UML and was written by caker, to stop a single linode DoSing a host.

Adam


PostPosted: Wed Aug 18, 2004 3:48 pm 
Senior Member

Joined: Fri Aug 06, 2004 5:49 pm
Posts: 158
My Linode came to a crawl while emerging the newest update of glibc, but that's the only package so far among the 20-30 packages I've installed that really hung the server. I should have expected that though, since I've only got a Linode 64. In the middle of the install, about half of my swap (256MB) was being used. I'm sure I was hitting that IO limiter, killing my own Linode for a while.


PostPosted: Fri Oct 22, 2004 11:27 am 
Junior Member

Joined: Wed Jul 21, 2004 5:15 pm
Posts: 25
So at what rate do the different Linodes refill?
Mine says it refills at 512, and I have a 96.

Is there a similar constraint on CPU usage?


PostPosted: Fri Oct 22, 2004 12:12 pm 
Senior Member

Joined: Fri Aug 06, 2004 5:49 pm
Posts: 158
Yes, they all refill at the same rate, see:
http://www.linode.com/forums/viewtopic.php?t=1231


PostPosted: Mon Dec 06, 2004 3:01 am 

Joined: Mon Dec 06, 2004 2:47 am
Posts: 1
I've been having some trouble with a large import and when packing an overstuffed Zope database. Once io_tokens goes negative the system just crawls. To avoid hitting negative numbers I created a script that temporarily suspends the offending process, sleeps for a few minutes, resumes it for a minute and a half, and repeats.

Code:
kill -STOP 1622
sleep 300
kill -CONT 1622
sleep 90
sh throttle


    1622 is the pid of the process.
    Throttle is the name of the script.

The end result may not be any faster, but it prevents the server from bottoming out and it makes me feel better, psychologically. YMMV.


PostPosted: Mon Dec 06, 2004 10:21 am 
Senior Member

Joined: Sun Mar 14, 2004 9:18 pm
Posts: 116
Website: http://michael.susens-schurter.com/
WLM: mschurter@yahoo.com
Yahoo Messenger: mschurter
Location: Peoria, IL
A great little script, but if you're having that many problems hitting your IO limiter, I'd look into a hardware upgrade.


PostPosted: Mon Dec 06, 2004 4:38 pm 
Senior Member

Joined: Thu Aug 28, 2003 12:57 am
Posts: 273
ksmith99 wrote:
Code:
kill -STOP 1622
sleep 300
kill -CONT 1622
sleep 90
sh throttle


Won't that script run sh recursively indefinitely? So you'll get a new sh process every time through the loop? Why not just do:

Code:

#!/bin/sh

while true; do
  kill -STOP $1
  sleep 300
  kill -CONT $1
  sleep 90
done



Then you can run the script against a target pid, like "throttle.sh 1234" ...

