So you have a LAN with 50+ users and you set up a nice Squid w3cache as a transparent proxy with 100 GB of space reserved for the cache (HDDs are so cheap nowadays…). Weeks pass and suddenly you notice that something is messing up your web experience, as Firefox decides to run painfully slow. You waste about 30 minutes hunting for the culprit (changing your DNS servers, clearing the browser cache, etc.) until you decide to check the router, and then Squid and its logs. And there you find something fishy:
2007/01/01 17:51:19| WARNING! Your cache is running out of filedescriptors
2007/01/01 17:51:35| WARNING! Your cache is running out of filedescriptors
2007/01/01 17:51:51| WARNING! Your cache is running out of filedescriptors
(...)
I won’t be explaining why this happens; others have done that already. What I will do is present a solution that does not require recompiling or reinstalling Squid.
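Before touching anything, it can be useful to check what limit Squid is actually running with. The cache manager will tell you; a quick check, assuming squidclient is installed and the proxy answers on its default port:

squidclient mgr:info | grep -i descriptors

The output should contain something along these lines (the numbers are only an illustration):

Maximum number of file descriptors:   1024
Available number of file descriptors:  130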
RedHat/Fedora
/etc/init.d/squid stop
nano /etc/squid/squid.conf
# set (or add) this directive:
max_filedesc 4096
nano /etc/init.d/squid
# add this just after the comments (before any script code)
ulimit -HSn 4096
/etc/init.d/squid start
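For reference, the top of the edited init script should end up looking roughly like this (a sketch only; the packaged comment header varies between releases, the point being that the ulimit call sits before any actual commands):

#!/bin/bash
# squid    This shell script takes care of starting and stopping Squid.
# chkconfig: - 90 25
# (rest of the packaged comment header)

# raise the hard and soft per-process file descriptor limits before Squid starts
ulimit -HSn 4096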
Debian
nano /etc/default/squid
# set (or add) this line:
SQUID_MAXFD=4096
/etc/init.d/squid restart
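This works on Debian because the packaged init script sources the defaults file and applies the limit itself; the relevant part looks more or less like this (an approximation, not a verbatim copy of the script):

# excerpt (approximate) from /etc/init.d/squid
[ -f /etc/default/squid ] && . /etc/default/squid
if [ -n "$SQUID_MAXFD" ]; then
    ulimit -n "$SQUID_MAXFD"
fi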
Ubuntu
nano /etc/default/squid
# set (or add) this line:
SQUID_MAXFD=4096
/etc/init.d/squid restart
And now watch /var/log/squid/cache.log for a line similar to this:
2007/01/01 18:32:27 With 4096 file descriptors available
If it still says 1024 file descriptors available (or a similarly low value), you are out of luck (or you’ve just messed something up).
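A quick one-liner for that check, assuming the default log location:

grep 'file descriptors available' /var/log/squid/cache.log | tail -n 1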
Surely editing /usr/include/bits/typesizes.h is unnecessary if you don’t recompile Squid (and that version of Squid doesn’t need that file changed anyway, as its configure script has a --maxfd option).
I was certainly able to get my FC6 Squid installation to use 2048 descriptors by changing just /etc/init.d/squid and /etc/squid/squid.conf.
On Debian you can increase the number of fds by changing /etc/default/squid.
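A side note on the recompilation remark above: if you do end up rebuilding Squid from source anyway, the limit can also be set at configure time. On the 2.x series the option is, as far as I remember, spelled --with-maxfd (3.x later renamed it to --with-filedescriptors):

./configure --with-maxfd=4096
make && make install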
Thanks a bunch. /etc/default/squid is the most obvious place to look, but somehow I kept looking in the wrong places. Pushing that from 1024 to 4096 instantly made my users happy. :)
Thanks for the tips! I’ve updated the article.
Thanks guys. This is great… (I had exactly the same problem, now resolved). Thanks again…
Thanks mate. I was expecting the worst, like someone DDoSing my Squid.
Thanks a lot. It helped me on IPCop.
I set SQUID_MAXFD=4096 while Squid was stopped, then started Squid and it works fine now.
Thanks a bunch again. Keep it up, please.
Omygosh, thank you, thank you, thank you, this saved me and my users so much heartache!
Slight typo in the Debian instructions; the squid restart command should read:
/etc/init.d/squid restart
Fixed! Thanks :)
Thanks man!!! Saved a lot of time :D
I can’t find /etc/init.d and I use SmoothWall…
Help me…
Thank you so much…
I’m using IPCop 1.4.21 + Advanced Proxy.
Add this line:
SQUID_MAXFD=8192
in /var/ipcop/proxy/advanced/acls/include.acl
and the warning lines are history…
Any idea how to set it on Squid 3.x?
max_filedesc 4096 does not work.
Current Ubuntu 10.04 (LTS) has a bug, so setting SQUID_MAXFD in /etc/default/squid does not work.
I changed the following line in squid.conf:
from: # max_filedescriptors 0
to: max_filedescriptors 8128
Restarted Squid, and it works:
2010/11/13 18:24:18| Starting Squid Cache version 2.7.STABLE7 for amd64-debian-linux-gnu…
2010/11/13 18:24:18| Process ID 3312
2010/11/13 18:24:18| With 8128 file descriptors available
ubuntu bug report:
https://bugs.launchpad.net/ubuntu/+source/squid/+bug/580590
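To double-check that the running process really picked up the higher limit (and not just the config file), the kernel exposes per-process limits; a quick look, assuming pgrep is available and its oldest match is the main Squid process:

grep 'Max open files' /proc/$(pgrep -o squid)/limits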
Thanks for the great tip, it sorted out our problem: 150+ local users, 400 GB of cache space, and we hit the wall at 266 GB cached.