Just throwing out a guess here, but you may have a process that's opening files and not closing them. There's a proc file to see how many files are open:
Code:
$ cat /proc/sys/fs/file-nr
2566 0 71806
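If you want a history of that first number without setting up rrdtool right away, a quick-and-dirty option is to append a timestamped sample from cron (the log path below is just an example):
Code:
$ echo "$(date '+%s') $(awk '{print $1}' /proc/sys/fs/file-nr)" >> /tmp/file-nr.log
Run that every few minutes from cron and you'll have something you can graph later.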
The first number is the count of allocated file handles (i.e. files currently open), the second is the number of allocated-but-unused handles, and the third number* is the maximum number of file handles. Watch the first number for a few days--or better yet, graph it with rrdtool--and see if it grows over time. If it does, you can use lsof to see what files are open and determine what's holding so many handles. If you're like me and lazy, you probably don't want to eyeball a list of thousands of files to figure out which process has them all open. CLI to the rescue!
Code:
$ sudo lsof | awk '{print $1}' | sort | uniq -c | sort -n | tail
23 tlsmgr
24 cyrmaster
38 cron
43 smtpd
62 bash
101 sshd
203 tinyproxy
249 lmtpd
594 apache2
1457 imapd
By default, lsof lists more than just regular files--sockets, pipes, devices, current directories and so on--and I can't remember offhand how to filter those out. The counts should still give you an idea of which process is causing the problem.
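Another lazy trick once you have a suspect PID: the kernel exposes each process's open descriptors as symlinks under /proc/<pid>/fd, so you can count them directly without parsing lsof output. ($$ below is just the current shell standing in for a real PID; you'll need root to look at other users' processes.)
Code:
$ ls /proc/$$/fd | wc -l
And sudo lsof -p <pid> will give you the same list with filenames attached.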
If, after all of that, you find that it's not a runaway process, then you can increase the maximum number of open files very easily:
Code:
$ echo new_max_files | sudo tee /proc/sys/fs/file-max
As far as I know, the only ramification of doing that is memory usage and the amount of time it takes to find an open file handle when opening a new file. I'm fairly sure that both are linear, but you may want to look into that.
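One caveat I should mention: anything you write under /proc is lost on reboot. To make the new limit stick, you can set it in /etc/sysctl.conf instead (the value below is just an example):
Code:
fs.file-max = 100000
Then sudo sysctl -p applies it immediately without a reboot.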
Hope that helps you a little.
* On a 2.6 kernel the second number will always be zero.