So for our application, you basically have the following changes to make:
1) Essential changes for squid-2.6.X or earlier are:
    hierarchy_stoplist cgi-bin
    acl QUERY urlpath_regex cgi-bin
These two changes allow queries with ? in the URL to be cached. By default squid does not cache dynamic web pages, and since the Frontier information is generated by Tomcat, it technically consists of dynamic web pages. Without these two changes, Frontier won't work. (The default values of these two lines are:
    hierarchy_stoplist cgi-bin ?
    acl QUERY urlpath_regex cgi-bin \?
so it's not much of a change.)
Things have changed a little starting from squid-2.7.X. The "hierarchy_stoplist" line still needs to be changed, but instead of the
    acl QUERY urlpath_regex cgi-bin \?
line, there is a line:
    refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
that needs to be changed to:
    refresh_pattern -i /cgi-bin/ 0 0% 0
2) Hardware dependent changes
    cache_mem
    cache_dir
You have to define a parameter cache_dir, which tells squid where to keep the cached information on disk and how large the cache should be. This should be at least 20000 megabytes but probably no more than 70% of the partition size (to allow room for log files and other uses). The other hardware parameter is cache_mem. The default cache_mem is only 8 MB, which is a very old default and should be increased, but we have found that squid performs better serving large objects from the disk cache than from the memory cache. We recommend at most 1/8 of the physical RAM and no more than 128 MB, leaving plenty of memory for disk buffering.
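As an illustration only, on a machine with 2 GB of RAM and a 150 GB partition reserved for the cache (the mount point and sizes here are assumptions, not recommendations for your site), the two lines might look like:
    # illustrative values; adjust the path and sizes to your own disk and RAM
    cache_mem 128 MB
    cache_dir ufs /var/cache/squid 100000 16 256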
3) Tuning changes
    maximum_object_size 1048576 KB   # or whatever is necessary
    maximum_object_size_in_memory 128 KB
    icp_port 0   # this just disables icp
The first item lets us cache objects up to 1 GB in size (which is a lot more than most web pages; the default is only 4 MB). To date, we have often cached tarballs up to 300 MB in size, and more might be needed. The second item lets us use the cache_mem we gave it above (the default is only 8 KB). ICP is used to have peer caches communicate; if you have just one squid, it is safer just to turn it off.
An important option for squid-2.6 and later is:
    collapsed_forwarding on
This option combines requests aggressively so that a file is retrieved only once from the origin server. This is a very good idea for computer farms, so make sure it is on. Finally, if your squid might possibly feed other squids, then set this:
    ignore_ims_on_miss on
The default for that option prevents caching when an upstream squid sends an If-Modified-Since request and the object isn't already cached.
4) Log file changes
    strip_query_terms off
    cache_store_log none
We do these for Frontier (strip_query_terms off keeps the query strings in access.log, which helps with debugging), but they are not essential.
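After making squid.conf changes like the ones above, it can help to validate the file and reload before relying on them. A quick sketch, assuming the tarball installation path used in the cron example below:
    SQUID_DIR=/nthome/bjb/frontier/frontier-cache/squid
    $SQUID_DIR/sbin/squid -k parse         # check squid.conf for syntax errors
    $SQUID_DIR/sbin/squid -k reconfigure   # tell a running squid to re-read its configuration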
Since the log files can get very big, we run a cron job every night to rotate the log files, keeping 10 days' worth. The cron job runs a script that looks like:
    #!/bin/bash
    SQUID_DIR=/nthome/bjb/frontier/frontier-cache/squid
    FNCRON_DIR=/nthome/bjb/frontier/frontier-cache/utils/cron
    # rotate squid's logs, appending any output and errors to a daily log
    $SQUID_DIR/sbin/squid -k rotate >> $FNCRON_DIR/daily.log 2>&1
You could also use a logrotate.d script.
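If you go the logrotate.d route instead, one possible sketch looks like the following; the log path and squid binary location assume the OS squid rpm rather than the tarball, and squid's own numbered rotation should be turned off with logfile_rotate 0 in squid.conf so that -k rotate only closes and reopens the files:
    /var/log/squid/*.log {
        daily
        rotate 10
        missingok
        notifempty
        nocreate
        sharedscripts
        postrotate
            /usr/sbin/squid -k rotate
        endscript
    }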
Daily might not be enough, however. On heavily used squids we run another hourly cron job that checks whether access.log is larger than a chosen size and, if so, does an extra rotation.
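A minimal sketch of such an hourly check, assuming the tarball layout above and an arbitrary 1 GB threshold (both are assumptions to adjust for your site):
    #!/bin/bash
    # do an extra rotation if access.log has grown past a threshold
    SQUID_DIR=/nthome/bjb/frontier/frontier-cache/squid
    ACCESS_LOG=$SQUID_DIR/var/logs/access.log
    MAX_BYTES=$((1024*1024*1024))   # ~1 GB; pick whatever limit suits your disk
    if [ -f "$ACCESS_LOG" ] && [ "$(stat -c %s "$ACCESS_LOG")" -gt "$MAX_BYTES" ]; then
        $SQUID_DIR/sbin/squid -k rotate
    fi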
That's basically it, except for one thing. For Frontier we monitor our squids remotely using SNMP. SNMP support is not compiled into squid by default, so to use it you have to enable it at compilation time. If you want to use SNMP there are a few more settings needed in squid.conf, especially the ACL access for whatever machines are allowed to read the SNMP information. Scientific Linux/Red Hat Enterprise Linux squid rpms should already have SNMP enabled at compilation time. Therefore, it should be possible to enable monitoring by adding something like the following to your squid.conf:
    snmp_port 3401
    acl HOST_MONITOR src 131.225.240.232/32 127.0.0.1/32 frontier.cern.ch
    acl HOST_MONITOR_NAME srcdomain cmsdbsfrontier.cern.ch
    acl snmppublic snmp_community public
    snmp_access allow snmppublic HOST_MONITOR
    snmp_access allow snmppublic HOST_MONITOR_NAME
    snmp_access deny all
all in the appropriate places in squid.conf. You may also need to open firewall and/or iptables holes for the addresses on the HOST_MONITOR line above. The 131.* address is for the monitor at Fermilab and the cern.ch names are for the monitor at CERN.
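To check that SNMP is answering from an allowed host, one possible quick test is net-snmp's snmpwalk against squid's SNMP port (just a sanity-check sketch, using SNMP v2c, the public community, and squid's enterprise OID subtree; it prints numeric OIDs unless the squid MIB is loaded):
    snmpwalk -v 2c -c public localhost:3401 .1.3.6.1.4.1.3495.1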
One thing you can do is make a dummy installation of our tarball. It can be installed anywhere by any user. Then do a diff of your squid.conf against our squid.conf. For startup and shutdown procedures, you are on your own.
The compilation options we currently use are (Squid-2.6STABLE18 or later):
--disable-wccp --enable-snmp --disable-ident-lookups --with-large-files
(If you have a 64-bit OS, don't use --with-large-files)
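As a sketch of what the build then looks like from an unpacked squid source tree (the install prefix here is only an example):
    ./configure --prefix=/opt/squid --disable-wccp --enable-snmp \
        --disable-ident-lookups --with-large-files
    make && make install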
Hardware
The first step is to decide what hardware you want to run the squid cache server on. These are some FAQs:
1) Do I need to dedicate a node to squid and only squid?
This is up to you, but it is a good idea. It depends on how many jobs try to access the squid simultaneously and what else the machine is used for (see question 2). Large sites may need more than one squid (see question 4). The node needs to have network access to the internet and be visible to the worker nodes.
2) What hardware specs (CPU, memory, disk cache)?
For most purposes a 2 GHz CPU, 2 GB of memory, and 100 GB of disk cache should be adequate. This excludes the space needed for log files, which is determined by how heavily the system is used and what the clean-up schedule is; we assume something like rotating the logs every day and removing them after 10 days. From what we have seen, the most critical resource is the memory. If the machine serves other purposes, make sure the other tasks don't use up all the memory. We see 5-10% performance gains when using a 64-bit architecture. Scientific Linux 4 should have better I/O than Scientific Linux 3. Squid runs as a single thread, so if that is the only use of the machine, having more than 2 cores is a waste. You should also avoid AFS, NFS, and RAID for the cache_dir.
Here is a description of squid memory usage: if you have a decent amount of spare memory, the kernel will use it as page cache, so there is a good chance that frequently-requested items will in fact be served from RAM (via the page cache) even if it's not squid's RAM. There is also a design bottleneck in squid that limits the CPU efficiency of serving large objects from cache_mem, so resist the urge to give squid all your available memory. Let cache_mem handle your small objects and the kernel handle the larger ones.
3) What network specs (Gigabit if you have it)?
The latencies to the worker nodes will be lower if you have more bandwidth. The network is always the bottleneck for this system, so "Gb if you got it" is the motto, but it is not an absolute requirement. If you have many job slots, 2 bonded gigabit network connections are even better, and squid on one core of a modern CPU can pretty much keep up with 2 gigabits. Squid is single-threaded, so if you're able to supply more than 2 gigabits, multiple squid processes need to be used (consult with CMS frontier support for details if you want to try).
4) How many squids do I need?
Sites with over 500 job slots should have at least 2 squids for reliability. We currently estimate that sites should have one gigabit on a squid per 500-1000 CMS job slots. A lot depends on how quickly jobs start; an empty batch queue that suddenly fills up will need more squids. If you don't have gigabit Ethernet, you will be able to handle fewer job slots. The number of job slots that can be safely handled per gigabit increases as the number of slots increases, because the chance that they all start at once tends to go down.
5) How should squids be load-balanced?
There are many ways to configure multiple squids: round-robin DNS, load-balancing networking hardware, LVS, etc. The simplest thing to do is just set up two or more squids independently and let Frontier handle it by making a small addition to site-local-config.xml (see below under Multiple Squid Servers). If there are many thousands of job slots, hardware-based load balancers can be easily overloaded, so DNS-based or client-based load balancing will probably be called for.
Responsible: BarryBlumenfeld