Linode Community Forums
Post subject: Very high disk i/o?
Posted: Wed Feb 17, 2010 2:13 pm
Senior Newbie

Joined: Wed Feb 17, 2010 1:50 pm
Posts: 6
Location: Upper Midwest
My server was experiencing very high disk i/o for several hours last night and was not very responsive during that time.

When it started, I checked what was running and didn't see anything other than the usual apache processes. Load was spiking too, though it wasn't clear why. Things improved for a while, but the issue came back overnight.

I see

ip_conntrack: table full, dropping packet.

in the log several times, and then the server rebooted last night. I didn't reboot it myself, and hadn't in about 60 days.

Does this sound like a security issue? Syn flood? Any tips are appreciated.


Posted: Wed Feb 17, 2010 10:53 pm
Senior Newbie

Joined: Wed Feb 17, 2010 1:50 pm
Posts: 6
Location: Upper Midwest
Argh, I figured it out: I had 13.6 GB of Apache log files on a server with a 15GB disk. Well, time to ... do ... something!
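For anyone else who hits this wall, a quick way to find what's eating a small disk is `du` piped through `sort`. The helper name and defaults here are my own invention, just a sketch:

```shell
# Hypothetical helper: list the largest directories under a tree, biggest last.
# Stays on one filesystem (-x); defaults to /var and the top 10 entries.
check_disk_hogs() {
    du -xh "${1:-/var}" 2>/dev/null | sort -h | tail -n "${2:-10}"
}
```

Something like `check_disk_hogs /var` would have pointed straight at the Apache log directory.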


Posted: Thu Feb 18, 2010 8:00 am
Senior Member

Joined: Sun Aug 02, 2009 1:32 pm
Posts: 222
Website: https://www.barkerjr.net
Location: Connecticut, USA
The logrotate package will rotate the logs weekly (configurable).
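For example, a minimal /etc/logrotate.d entry might look like this (the path assumes Debian-style Apache logs; adjust for your distro):

```
# /etc/logrotate.d/apache2 (sketch; path and retention count are illustrative)
/var/log/apache2/*.log {
    weekly
    rotate 52
    compress
    missingok
    notifempty
}
```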


Posted: Fri Feb 19, 2010 12:09 am
Senior Newbie

Joined: Wed Feb 17, 2010 1:50 pm
Posts: 6
Location: Upper Midwest
Sure, I have it set to rotate logs daily actually. I like to save the logs for our records, so I have it set to keep up to a full year of logs.

Each day's access.log is about 1.5GB uncompressed, which compresses down to about 100MB. So they sure do take up a lot of space once you have a few months' worth!
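Quick arithmetic shows how fast that adds up, even compressed:

```shell
# Back-of-the-envelope: ~100 MB/day compressed, kept for 365 days
echo "$(( 100 * 365 / 1024 )) GB"   # prints: 35 GB -- more than double a 15 GB disk
```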


Posted: Fri Feb 19, 2010 12:30 am
Senior Member

Joined: Sat Feb 14, 2009 1:32 am
Posts: 123
marshmallow wrote:
Sure, I have it set to rotate logs daily actually. I like to save the logs for our records, so I have it set to keep up to a full year of logs.

Each day's access.log is about 1.5GB uncompressed, which goes down to 100MB. So, they sure do take up a lot of space when you have a few months worth!


I am biased since my main job is in computer security, but I like to keep some uncompressed logs around. How do you deal with reviewing the files when you need to? The only thing I can really think of is using some odd command line kung-fu like:

Code:
tar -xOzf logfile.tgz | grep "search string"


I can see that being a pain for large files. Perhaps there is a way to leave 7 days uncompressed and compress anything after that? Any thoughts?


Posted: Fri Feb 19, 2010 5:12 am
Senior Member

Joined: Sun Feb 08, 2004 7:18 pm
Posts: 562
Location: Austin
Have you tried
Code:
zgrep "search string" *.gz
?


Posted: Fri Feb 19, 2010 11:07 am
Senior Member

Joined: Sat Feb 14, 2009 1:32 am
Posts: 123
Xan wrote:
Have you tried
Code:
zgrep "search string" *.gz
?


I have not. That did work though.


Posted: Fri Feb 19, 2010 12:59 pm
Senior Member

Joined: Sun Feb 08, 2004 7:18 pm
Posts: 562
Location: Austin
Cool. There's zcat as well, whose function you can probably guess. Also bzcat and bzgrep.
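A tiny demonstration (the file name and contents are invented for illustration):

```shell
# Write a fake compressed access log, then search and stream it in place
printf 'GET /index.html 200\nGET /missing.html 404\n' | gzip > /tmp/demo-access.log.gz
zgrep '404' /tmp/demo-access.log.gz    # prints: GET /missing.html 404
zcat /tmp/demo-access.log.gz | wc -l   # prints: 2
```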


Posted: Fri Feb 19, 2010 3:53 pm
Senior Member

Joined: Fri Dec 07, 2007 1:37 am
Posts: 385
Location: NC, USA
Also, less is pretty good at figuring out how to display most compressed files.


Posted: Fri Feb 19, 2010 3:58 pm
Senior Member

Joined: Sun Feb 08, 2004 7:18 pm
Posts: 562
Location: Austin
Huh, the less that I'm using doesn't seem to do that, but zless and bzless work.


Posted: Fri Feb 19, 2010 4:37 pm
Senior Member

Joined: Fri Dec 07, 2007 1:37 am
Posts: 385
Location: NC, USA
Xan wrote:
Huh, the less that I'm using doesn't seem to do that, but zless and bzless work.

Yeah, now that I look at it, it may be a Gentoo-specific thing. The functionality seems to be enabled by
Code:
export LESSOPEN='|lesspipe.sh %s'

and then a fairly substantial script in /usr/bin/lesspipe.sh

Not sure why other distros wouldn't be using it, though; it's pretty handy.


Posted: Fri Feb 19, 2010 10:46 pm
Senior Newbie

Joined: Wed Feb 17, 2010 1:50 pm
Posts: 6
Location: Upper Midwest
carmp3fan wrote:
I can see that being a pain for large files. Perhaps there is a way to leave 7 days uncompressed and compress anything after that? Any thoughts?


I usually do use zcat|grep.

I think you could set logrotate to leave 7 days uncompressed and compress anything older than that, but I'm not certain that's built in as an option.

Usually it's set up to compress logname.1 to logname.2.gz, move the current log to logname.1, and create a new current log file. So yesterday's log is left uncompressed by default until the next rotation. I think it does that in case a process is still writing to it, though.
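One way to approximate "a week uncompressed, everything older gzipped" outside of logrotate is a daily cron job. This helper is a sketch of my own, not a logrotate feature, and the log-name pattern is an assumption:

```shell
# Hypothetical helper: gzip rotated logs older than N days, leaving the most
# recent ones uncompressed for easy grepping. Assumes daily rotation names
# like access.log.1, access.log.2, ...
compress_old_logs() {
    dir=$1
    days=${2:-7}
    find "$dir" -name 'access.log.*' ! -name '*.gz' -mtime +"$days" -exec gzip {} \;
}
```

Run daily from cron, e.g. `compress_old_logs /var/log/apache2 7`.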


Powered by phpBB® Forum Software © phpBB Group