So you have a LAN with 50+ users and you set up a nice Squid w3cache as a transparent proxy, with 100 GB of disk reserved for the cache (HDDs are so cheap nowadays…). Weeks pass and suddenly you notice something is messing up your web experience: Firefox has become painfully slow. You waste about 30 minutes chasing the wrong culprits (changing your DNS servers, clearing the browser cache, etc.) until you decide to check the router, and then Squid and its logs. And there you find something fishy:
2007/01/01 17:51:19| WARNING! Your cache is running out of filedescriptors
2007/01/01 17:51:35| WARNING! Your cache is running out of filedescriptors
2007/01/01 17:51:51| WARNING! Your cache is running out of filedescriptors
(...)
I won’t be explaining why this happens; others have done that before. What I’m going to do is present a solution that does not require recompiling or reinstalling Squid.
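Before applying the fix, you can confirm the diagnosis by asking Squid how many descriptors it actually has. A quick check, assuming squidclient is installed and the proxy answers on localhost port 3128:
# query Squid's cache manager for file descriptor statistics
squidclient -h localhost -p 3128 mgr:info | grep -i 'file desc'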
RedHat/Fedora
/etc/init.d/squid stop
nano /etc/squid/squid.conf
max_filedesc 4096
nano /etc/init.d/squid
# add this just after the comments (before any script code)
ulimit -HSn 4096
/etc/init.d/squid start
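(If your Squid rejects max_filedesc, try the longer spelling max_filedescriptors used by newer releases.) If you prefer to script the change instead of editing files by hand, here is a rough sketch; back up both files first, and note it assumes a stock RedHat/Fedora layout and GNU sed:
# append the directive to squid.conf unless it is already there
grep -q '^max_filedesc' /etc/squid/squid.conf || echo 'max_filedesc 4096' >> /etc/squid/squid.conf
# raise the hard and soft limits right after the shebang in the init script
sed -i '1a ulimit -HSn 4096' /etc/init.d/squid
/etc/init.d/squid restart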
Debian/Ubuntu
nano /etc/default/squid
SQUID_MAXFD=4096
/etc/init.d/squid restart
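On these systems the packaged init script reads SQUID_MAXFD from /etc/default/squid and sets the ulimit itself, so no init-script surgery is needed. To apply the change non-interactively, a sketch assuming the variable ships commented out (as #SQUID_MAXFD=1024) and GNU sed:
# uncomment the variable (if needed) and raise the limit
sed -i 's/^#\?SQUID_MAXFD=.*/SQUID_MAXFD=4096/' /etc/default/squid
/etc/init.d/squid restart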
Now watch /var/log/squid/cache.log for a line like:
2007/01/01 18:32:27| With 4096 file descriptors available
If it still says 1024 file descriptors available (or a similarly low value), you are out of luck (or you’ve just messed something up).
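To check without reading the whole log, pull the startup line out directly (the path assumes the default log location used above):
# show the most recent file descriptor count reported at Squid startup
grep 'file descriptors available' /var/log/squid/cache.log | tail -n 1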