2007-10-26
Transparent proxying when the iptables gateway (linuxBox) and squid (squidBox) are not on the same machine.
The relevant packets must be forwarded from linuxBox to the squid machine; run the following command on linuxBox.
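The actual command is missing from the note. A minimal sketch of the usual rules for this setup, assuming squidBox is 192.168.1.2 listening on port 3128, linuxBox's LAN address is 192.168.1.1, and the LAN interface is eth0 (all of these values are assumptions):

```shell
# On linuxBox: send client web traffic (but not squidBox's own) to squid
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
  ! -s 192.168.1.2 -j DNAT --to-destination 192.168.1.2:3128
# Rewrite the source so squidBox's replies come back through this gateway
iptables -t nat -A POSTROUTING -o eth0 -d 192.168.1.2 -p tcp --dport 3128 \
  -j SNAT --to-source 192.168.1.1
```

Without the SNAT rule, squidBox would reply directly to the client, which would then drop the packets because it never opened a connection to squidBox.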
2007-08-27
Clearing the squid cache
Simply deleting the directories under cache does not seem quite right; the FAQ on the squid site says:
# squid -k kill  (or: squid -k shutdown)
# echo "" > /usr/local/squid/var/cache/swap.state
# echo "" > /var/spool/squid/swap.state
If you have more than one cache directory, run the command above on the swap.state file in each of them.
Do not simply delete the swap.state file, and do not truncate it to zero length; you need to write some junk data into it, as the commands above do.
Then restart squid and it will rebuild the cache directories.
2007-07-16
Some testing notes (the two test runs gave inconsistent results):
mc1 is the squid server, acting as the cache for dlbe.
First test:
From a client, request http://download.it.com.cn/about.html through mc1; the page is then stored on mc1. When the client requests the same page again, it is served straight from mc1; mc1 does not contact dlbe again until the cached copy expires.
After modifying about.html on dlbe, the client still sees the old file, because with the current squid settings mc1 does not check the timestamp on dlbe for every request. Only when the client forces a refresh with Ctrl+F5 does mc1 fetch the file from dlbe again and the client see the latest version.
Second test: after the client's first read of about.html, the content of about.html was changed on dlbe; when the client refreshed, its copy changed accordingly.
netstat -an on mc1 showed a connection from mc1 to dlbe:80.
Capturing with tcpdump showed that when the client refreshes the page while about.html is unchanged, only a handful of packets are exchanged, whereas after about.html is changed on dlbe there are noticeably more. This suggests that when the file has not changed, mc1 only checks the timestamp, finds it unmodified, and does not fetch the body; once the timestamp changes, it pulls the whole file into its cache. (This second result is the satisfying one.)
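A capture like the one described can be taken on mc1 with something along these lines (the interface name eth0 is an assumption; dlbe is the origin server's hostname):

```shell
# Watch traffic between the cache and the origin while refreshing on the client
tcpdump -i eth0 -nn host dlbe and tcp port 80
```

An unchanged page should show only a short revalidation exchange; a changed page shows the full response body being transferred.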
==================================================
The southern page cache was upgraded from squid 2.5 to squid 2.6, with the configuration file copied over from 2.5.
The configuration after minor changes:
http_port 59.42.241.101:80 vhost vport
#httpd_accel_host virtual
#httpd_accel_port 80
#httpd_accel_with_proxy on
#httpd_accel_uses_host_header on
But squid reported "Unable to forward this request at this time".
Cause: the configuration above is in accelerator mode; it should be changed to transparent mode.
Reference thread:
tor 2006-09-21 klockan 16:39 +0800 skrev Fala wang:
> I read that bug 1650 is dealt with in Squid 2.6.STABLE3, but I
> still get this problem. Detailed error info as follows:
> The following error was encountered:
> * Unable to forward this request at this time.
This says Squid is not allowed to go direct (never_direct, or
accelerator mode), and there is no parent proxy available.
> visible_hostname test98
> htcp_port 4827
> http_port 80 transparent vhost vport
You can not mix both transparent interception mode (transparent) and
accelerator mode (vhost / vport / defaultsite) on the same http_port.
If this is a transparently intercepting proxy then use only transparent.
If this is a web server accelerator then use only vhost and defaultsite.
You probably do not want vport in either configuration.
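Applied to the configuration above, the fix is to pick a single mode on the http_port line. For a transparent proxy that would be (a sketch, keeping the original listen address):

```
http_port 59.42.241.101:80 transparent
```

For an accelerator in front of a known site, `vhost` plus `defaultsite=www.example.com` (the hostname here is a placeholder) would be used instead, per the advice in the thread.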
2007-07-16
squid log format
logformat combined %>a %ui %un [%{%d/%b/%Y:%H:%M:%S +0800}tl] "%rm %ru HTTP/%rv" %Hs %<st "%{Referer}>h" "%{User-Agent}>h"
%>a                               client source address
%ui                               user name from ident
%un                               user name (any available)
[%{%d/%b/%Y:%H:%M:%S +0800}tl]    local time
%rm                               request method (GET/POST)
%ru                               requested URL
HTTP/%rv                          request protocol version
%Hs                               HTTP status code
%<st                              reply size in bytes
%{Referer}>h                      Referer request header
"%{User-Agent}>h"                 user agent
2006-11-29
Check the /etc/hosts file on the squid machine itself.
If the file is not on the local machine and has to be fetched from another machine, also check on that machine:
the domain mappings for /s0/download/
/etc/hosts
the domain mappings on dlfe
2006-11-07
Messages in the squid cache log:
2006/11/07 09:13:27| comm_udp_sendto: FD 5, 219.136.248.121, port 53: (1) Operation not permitted
2006/11/07 09:13:27| idnsSendQuery: FD 5: sendto: (1) Operation not permitted
Fix: restart iptables (the firewall rules were apparently blocking squid's outgoing DNS queries).
2006-10-25
1.
Using tmpfs to speed up your Linux server
Learned another trick today from my friend Gao Chunhui: tmpfs. I adapted it as a RAM-backed disk to hold squid's cache files and PHP sessions. Noticeably faster!
The system mounts /dev/shm by default; that is the so-called tmpfs. Some people liken it to a ramdisk, but it is not the same. Like a ramdisk, tmpfs can use your RAM, but it can also spill over to your swap partition. Moreover, a traditional ramdisk is a block device and needs an mkfs-style command before you can actually use it, whereas tmpfs is a filesystem, not a block device: you just mount it and it is ready to use.
tmpfs has the following advantages:
1. The filesystem size is dynamic.
2. Another major benefit of tmpfs is its lightning speed. Because a typical tmpfs filesystem resides entirely in RAM, reads and writes are nearly instantaneous.
3. tmpfs data does not survive a reboot, since virtual memory is inherently volatile. So you need some scripts to handle the mounting and binding.
Enough theory; you're probably tired of it, so here is my setup :)
First create a tmp directory under /dev/shm, then bind it over the real /tmp:
mkdir /dev/shm/tmp
chmod 1777 /dev/shm/tmp
mount --bind /dev/shm/tmp /tmp
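Because tmpfs is wiped on reboot (point 3 above), the same steps have to be repeated at boot. A minimal sketch, assuming an rc.local-style startup script (the file path is an assumption; distributions differ):

```shell
# /etc/rc.local fragment: recreate the tmpfs-backed /tmp at every boot
mkdir -p /dev/shm/tmp
chmod 1777 /dev/shm/tmp
mount --bind /dev/shm/tmp /tmp
```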
2. The log contains many lines like: WARNING: Disk space over limit: 184918572 KB > 16384000 KB
Material found online:
tor 2003-06-12 klockan 10.55 skrev IKEDA Shigeru:
> -JhAzEr- <catsedp@cats.com.ph> writes:
>
> | 2003/06/12 15:36:37| WARNING: Disk space over limit: -7816424 KB > 2048000 KB
> | 2003/06/12 15:36:48| WARNING: Disk space over limit: -7816424 KB > 2048000 KB
> | 2003/06/12 15:36:59| WARNING: Disk space over limit: -7816424 KB > 2048000 KB
> | 2003/06/12 15:37:10| WARNING: Disk space over limit: -7816424 KB > 2048000 KB
> |
> | what does this mean? i thought it automatically flushes old cache to make room
> | for the new ones.
> I get the same messages almost every time I stop & start squid.
> I'm using squid-2.5.STABLE3 on OpenBSD-3.2/i386. I got the same warning
> with STABLE2 as well.
This may be seen if swap.state has been corrupted. Such corruption can
occur on unexpected system shutdowns (power failure, kernel panic etc).
Try the following:
1. Shut down squid.
2. Remove the swap.state files from your cache directories.
3. Start Squid again. It will slowly rebuild swap.state from the cache
files.
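The three steps above, as commands (the cache directory path is an assumption; use the cache_dir paths from your own squid.conf):

```shell
squid -k shutdown                 # 1. stop squid cleanly
rm /var/spool/squid/swap.state*   # 2. remove swap.state in each cache_dir
squid                             # 3. start squid; it slowly rebuilds swap.state
```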
Problem 3
One suggestion found online: increase --enable-async-io= at configure time.
The explanation for why this message appears:
I had a weird "queueing" problem with EXT3 and ReiserFS.
From time to time, a disk started a write operation (monitored via iostat)
which lasted sometimes up to 20 seconds. When these 'disk flush' happened,
the system just stalled, waiting for this disk queue to empty, blocking every
disk I/O. Meanwhile, the *squid operations* got queued, generating the
warning.
This was really annoying, since when these disk flushes happened, the cache
stopped responding.
When I put my cache on an XFS partition, things ran just GREAT. A real
improvement in disk i/o performance. I have 17 disks; 16 for cache (the OS
disk is running EXT3). After changing this, I got rid of this problem. And
these disk flushes never happened again.
aio_queue_request: WARNING - Queue congestion (which means the request queue
to async is getting larger than it "should" be) is just a WARNING, so
don't worry about it much unless you get an "aio_queue_request:
WARNING - Disk I/O overloading".
To overcome "Disk I/O overloading" try to increase the number of threads.
Otherwise, add another drive.
This relates to the number of simultaneous requests squid is handling
-- I'm assuming you are using AUFS.
Basically, the IO threads are not processing "fast enough" and the io
request queue is getting long. "Fast enough" is a metric defined by
squid (it's in the store_dir.c file I think, and there isn't much
commentary on where the heuristics originate from).
You can potentially alleviate this by increasing the number of IO
threads at compile time - but it depends on how much disk activity
you are seeing. A quick look at iostat (or sar data) correlated to
the queue congestion messages should be enough to tell.
If the disks aren't saturated, I'd say you could increase the number
of threads to at least 48 (depending on hardware), but that's not
much more than the 36 you seem to have already. I don't know the
maximum number of threads you can really throw at the problem, but
you can obviously experiment.
If your disks are overloaded, there won't be much you can do (aside
from adding more spindles, or more RAM). File system and kernel io
tuning may yield small gains, but it won't solve the core problem.
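A sketch of the compile-time change discussed above, run in the squid source tree (the thread count of 48 is the figure suggested in the quote; available store flags vary by squid version):

```shell
# Rebuild squid with AUFS and a larger async IO thread pool
./configure --enable-storeio=aufs --enable-async-io=48
make && make install
```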
Problem 4: about file descriptors
After running ./configure, edit:
vi /home/htm/squid-2.5.STABLE10-20050825/include/autoconf.h
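The note does not say what to change in autoconf.h. In squid 2.5 the value usually edited there is the compiled-in file descriptor limit; a sketch (the figure 8192 is an assumption, pick it to match your system's ulimit):

```c
/* include/autoconf.h: raise the compiled-in FD limit before running make */
#define SQUID_MAXFD 8192   /* default build often has a much lower value */
```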
2006-08-28
Aug 28 09:13:11 down2 (squid): ipcache_init: DNS name lookup tests failed.
Aug 28 09:13:11 down2 squid[1224]: Squid Parent: child process 2356 exited due to signal 6
squid could be started, and the process was visible in the process list:
/s0/squid/sbin/squid
(squid)
but it would not bind port 8080 to the IP and listen.
Fix: change the startup command to /s0/squid/sbin/squid -D
(-D skips the initial DNS lookup tests). That solved the problem; the following processes then appeared:
[root@down2 logs]# ps aux | grep squid
root   ...
squid  ...
squid  ...
root   ...
2006
Problem:
# /usr/local/squid/sbin/squid -k reconfigure
squid: ERROR: No running copy
Resolution:
# ps aux | grep squid
nobody   ...
nobody   ...
root     ...
# cat /etc/squid/squid.conf | grep "squid.pid"
pid_filename /var/run/squid.pid
# cat /var/run/squid.pid
5049
The PID in the file did not match the running squid process.
# vi /etc/squid/squid.conf
Keep pid_filename set to /var/run/squid.pid
(this is the default.)
Then write the PID of the running squid parent (2487, from the ps output) into the pid file squid actually reads:
# echo "2487" > /usr/local/squid/var/logs/squid.pid
# /usr/local/squid/sbin/squid -k reconfigure
from: http://blog.sina.com.cn/s/blog_45b28bfb0100f9l2.html