Today I tested my biogine. It can handle about 260 requests at once, but that is far too low for our project's requirements: in this project, the server may have to face up to thousands of requests at a time.
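For the record, a concurrency test like that can be reproduced with ApacheBench; the endpoint and numbers below are only placeholders, not the exact command I ran:
$ # send 2000 requests, keeping 300 of them in flight at the same time (hypothetical URL)
$ ab -n 2000 -c 300 http://127.0.0.1:8080/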
After discussing the problem, we decided to study the source code of nginx, which is simply a great HTTP server. In more and more deployments, nginx prevails over Apache.
Of course, that also means its source code will not be easy to study.
$ # start a listener on port 9999 with a backlog of 5, never accepting anything
$ perl -MIO::Socket -e '$s=new IO::Socket::INET( LocalPort => 9999, Listen => 5 ); sleep(1) while 1'
$ # open as many client connections to it as possible, keeping every handle alive
$ perl -MIO::Socket -le 'foreach(1..100000){ $c=new IO::Socket::INET( PeerAddr => "127.0.0.1:9999" ); redo unless $c; push @c, $c; print }'
$ # count how many of those connections actually reached ESTABLISHED
$ netstat -nat | grep EST | grep 9999 | wc -l
The snippet above is a small test of the connection backlog in the OS; it comes from one of my workmates.
From this test, the main problem is the limit imposed by the OS. Now we are all looking for a good and easy way to solve it: we must raise the number of connections the server can hold at the same time.
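One direction is to raise the kernel-side limits first. Here is a rough sketch of the usual knobs on a Linux box (the values are only examples, not settings we have decided on):
$ # allow a larger accept backlog for listening sockets
$ sudo sysctl -w net.core.somaxconn=4096
$ # allow more half-open connections waiting in the SYN queue
$ sudo sysctl -w net.ipv4.tcp_max_syn_backlog=4096
$ # raise the per-process open file descriptor limit, since each connection costs one fd
$ ulimit -n 65535
The application also has to ask for a bigger backlog itself, for example Listen => 1024 instead of 5 in the Perl snippet above, because the kernel takes the smaller of the two values.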
Maybe we can use Erlang to develop the server and build an interface as a bridge to the BioOne algorithm. However, I don't know whether our leaders will accept that approach.