Tailrank Architecture - Learn How to Track Memes Across the Entire Blogosphere

Tailrank tracks the hottest topics on the internet by indexing over 24 million weblogs and feeds per hour, processing 52TB of raw blog content a month. Founder Kevin Burton explains how they built a distributed system to handle that volume of data while keeping it consistent.
Reposted from: http://www.highscalability.com/tailrank-architecture-learn-how-track-memes-across-entire-blogosphere


Ever feel like the blogosphere is 500 million channels with nothing on? Tailrank finds the internet's hottest channels by indexing over 24M weblogs and feeds per hour. That's 52TB of raw blog content (no, not sewage) a month and requires continuously processing 160Mbit/s of IO. How do they do that?

This is an email interview with Kevin Burton, founder and CEO of Tailrank.com. Kevin was kind enough to take the time to explain how they scale to index the entire blogosphere.


Sites


Tailrank - We track the hottest news in the blogosphere!

Spinn3r - A blog spider you can specialize with your own behavior instead of creating your own.

Kevin Burton's Blog - his blog is an eclectic mix of politics and technical talk. Both are always interesting.


Platform


MySQL

Java

Linux (Debian)

Apache

Squid

PowerDNS

DAS storage.

Federated database.

ServerBeach hosting.

Job scheduling system for work distribution.


Interview


What is your system for?

Tailrank was originally a memetracker to track the hottest news being discussed
within the blogosphere.

We started having a lot of requests to license our crawler and we shipped that
in the form of Spinn3r about 8 months ago.

Spinn3r is a self-contained crawler for companies that want to index the full
blogosphere and consumer-generated media.

Tailrank is still a very important product alongside Spinn3r and we're working
on Tailrank 3.0 which should be available in the future. No ETA at the moment
but it's actively being worked on.


What particular design/architecture/implementation challenges does your system have?

The biggest challenge we have is the sheer amount of data we have to process and
keeping that data consistent within a distributed system.

For example, we process 52TB of content per month. This has to be indexed in a
highly available storage architecture, so the normal distributed database
problems arise.


What did you do to meet these challenges?

We've spent a lot of time in building out a distributed system that can scale
and handle failure.

For example, we've built a tool called Task/Queue that is analogous to Google's
MapReduce. It has a centralized queue server which hands out units of work to
robots which make requests.

It works VERY well for crawlers in that slower machines just fetch work at a
slower rate while more modern machines (or better tuned machines) request work
at a higher rate.

This ends up neatly addressing one of the classic distributed computing
fallacies: the assumption that the network is homogeneous.
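As a rough illustration of that pull model, here is a minimal in-process sketch (the class and method names are made up for the example, and Tailrank's actual Task/Queue is a network service rather than a shared in-memory queue):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal sketch of a pull-based work queue: robots ask for the next unit of
// work whenever they are ready, so faster machines naturally do more work.
public class TaskQueueSketch {

    // Central queue of work units (here, just feed URLs to crawl).
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

    public void submit(String unitOfWork) {
        queue.add(unitOfWork);
    }

    // A "robot" calls this when it wants more work; blocks until work exists.
    public String nextUnit() throws InterruptedException {
        return queue.take();
    }

    public static void main(String[] args) {
        final TaskQueueSketch server = new TaskQueueSketch();
        for (int i = 0; i < 100; i++) {
            server.submit("http://example.com/feed/" + i);
        }

        // Two robots with different speeds: the slower one simply requests
        // work less often; the queue never needs to know how fast each one is.
        startRobot(server, "fast-robot", 10);
        startRobot(server, "slow-robot", 50);
    }

    private static void startRobot(final TaskQueueSketch server,
                                   final String name, final long crawlMillis) {
        new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String unit = server.nextUnit();
                        System.out.println(name + " crawling " + unit);
                        Thread.sleep(crawlMillis); // simulated crawl time
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, name).start();
    }
}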

Task/Queue is generic enough that we could actually use it to implement
MapReduce on top of the system.

We'll probably open source it at some point. Right now it has too many
tentacles wrapped into other parts of our system.


How big is your system?

We index 24M weblogs and feeds per hour and process content at about
160-200Mbps.

At the raw level we're writing to our disks at about 10-15MBps continuously.
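As a quick sanity check on those figures (my own back-of-the-envelope arithmetic, not numbers from the interview): 52TB per month is roughly 52 x 10^12 bytes divided by 2.6 x 10^6 seconds, or about 20MB per second of raw content, which is roughly 160Mbit/s of sustained network IO and consistent with the 10-15MBps of continuous disk writes after processing.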


How many documents do you serve? How many images? How much data?

Right now the database is about 500G. We're expecting it to grow well beyond
this in 2008 as we expand our product offering.


What is your rate of growth?

It's mostly a function of customer feature requests. If our customers want more data we sell it to them.

In 2008 we're planning on expanding our cluster to index larger portions of the
web and consumer generated media.


What is the architecture of your system?

We use Java, MySQL and Linux for our cluster.

Java is a great language for writing crawlers. The library support is pretty
solid (though it seems like Java 7 is going to be killer when they add
closures).

We use MySQL with InnoDB. We're mostly happy with it though it seems I end up
spending about 20% of my time fixing MySQL bugs and limitations.

Of course nothing is perfect. MySQL for example was really designed to be used
on single core systems.

The MySQL 5.1 release goes a bit further toward fixing multi-core scalability
and locking issues.

I recently blogged about how these new multi-core machines should really be
considered N machines instead of one logical unit: Distributed Computing Fallacy #9.


How is your system architected to scale?

We use a federated database system so that we can split the write load as we see
more IO.
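A rough sketch of what splitting the write load across a federation of MySQL shards can look like (the shard URLs and the hash-based routing rule below are assumptions for illustration, not Tailrank's actual scheme):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Illustrative write-splitting across a federation of MySQL shards.
// The shard URLs and routing rule are assumptions made for this sketch.
public class FederatedWriter {

    private static final String[] SHARDS = {
        "jdbc:mysql://db0.internal/blogs",
        "jdbc:mysql://db1.internal/blogs",
        "jdbc:mysql://db2.internal/blogs"
    };

    // Route every feed to one shard by a stable hash of its URL so that all
    // writes (and later reads) for that feed hit the same database.
    private static String shardFor(String feedUrl) {
        int bucket = (feedUrl.hashCode() & 0x7fffffff) % SHARDS.length;
        return SHARDS[bucket];
    }

    public static void storePost(String feedUrl, String title) throws SQLException {
        try (Connection conn = DriverManager.getConnection(shardFor(feedUrl), "crawler", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO posts (feed_url, title) VALUES (?, ?)")) {
            ps.setString(1, feedUrl);
            ps.setString(2, title);
            ps.executeUpdate();
        }
    }
}

The useful property of deterministic routing like this is that any machine in the cluster can compute where a given feed lives without asking a central lookup service.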

We've released a lot of our code as Open Source, and this will probably be
released as Open Source as well.

We've already opened up a lot of our infrastructure code:


http://code.tailrank.com/lbpool - load balancing JDBC driver for use with DB connection pools.

http://code.tailrank.com/feedparser - Java RSS/Atom parser designed to elegantly support all versions of RSS

http://code.google.com/p/benchmark4j/ - Java (and UNIX) equivalent of Windows' perfmon

http://code.google.com/p/spinn3r-client/ - Client bindings to access the Spinn3r web service

http://code.google.com/p/mysqlslavesync/ - Clone a MySQL installation and setup replication.

http://code.google.com/p/log5j/ - Logger facade that supports printf style message format for both performance and ease of use.


How many servers do you have?

About 15 machines so far. We've spent a lot of time tuning our infrastructure
so it's pretty efficient. That said, building a scalable crawler is not an easy
task so it does take a lot of hardware.

We're going to be expanding FAR past this in 2008 and will probably hit about
2-3 racks of machines (~120 boxes).


What operating systems do you use?

Linux via Debian Etch on 64 bit Opterons. I'm a big Debian fan. I don't know
why more hardware vendors don't support Debian.

Debian is the big secret in the valley that no one talks about. Most of the big
web 2.0 shops like Technorati, Digg, etc use Debian.


Which web server do you use?

Apache 2.0. Lighttpd is looking interesting as well.


Which reverse proxy do you use?

About 95% of the pages of Tailrank are served from Squid.


How is your system deployed in data centers?

We use ServerBeach for hosting. It's a great model for small to medium sized
startups. They rack the boxes, maintain inventory, handle network, etc. We
just buy new machines and pay a flat markup.

I wish Dell, SUN, HP would sell directly to clients in this manner.

One data center right now. We're looking to expand into two for redundancy.


What is your storage strategy?

Directly attached storage. We buy two SATA drives per box and set them up in
RAID 0.

We use the redundant array of inexpensive databases solution so if an individual
machine fails there's another copy of the data on another box.

Cheap SATA disks rule for what we do. They're cheap, commodity, and fast.
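A minimal sketch of the "another copy on another box" idea at the application level (the host names are placeholders, and this is not lbpool or Tailrank's actual failover code):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Illustrative "redundant array of inexpensive databases": each partition has
// a full copy on a second box, so reads fail over when one machine is down.
public class RedundantReader {

    private static final String[] COPIES = {
        "jdbc:mysql://db1a.internal/blogs",
        "jdbc:mysql://db1b.internal/blogs"   // second copy of the same partition
    };

    public static Connection connect() throws SQLException {
        SQLException last = null;
        for (String url : COPIES) {
            try {
                return DriverManager.getConnection(url, "reader", "secret");
            } catch (SQLException e) {
                last = e; // this copy is down; try the next one
            }
        }
        throw last;
    }
}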


Do you have a standard API to your website?

Tailrank has RSS feeds for every page.

The Spinn3r service is itself an API and we have extensive documentation on the
protocol.
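For readers who want to consume the RSS feeds, a bare-bones sketch using only the Java standard library might look like this (the feed URL is a placeholder, not a documented endpoint):

import java.io.InputStream;
import java.net.URL;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

// Fetch an RSS feed over HTTP and print the item titles.
public class FeedReaderSketch {
    public static void main(String[] args) throws Exception {
        URL feed = new URL("http://tailrank.com/index.rss"); // hypothetical URL
        try (InputStream in = feed.openStream()) {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(in);
            NodeList titles = doc.getElementsByTagName("title");
            for (int i = 0; i < titles.getLength(); i++) {
                System.out.println(titles.item(i).getTextContent());
            }
        }
    }
}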

It's also free to use for researchers, so if any of your readers are pursuing a
Ph.D. or generally doing research work and need access to blog data, we'd love
to help them out.

We already have Ph.D. students at the University of Washington and the University
of Maryland (my alma mater) using Spinn3r.


Which DNS service do you use?

PowerDNS. It's a great product. We only use the recursor daemon but it's FAST.
It uses async IO though, so it doesn't really scale across processors on
multicore boxes. Apparently there's a hack to get it to run across cores but it
isn't very reliable.

AAAA caching might be broken though. I still need to look into this.


Who do you admire?

Donald Knuth is the man!


How are you thinking of changing your architecture in the future?

We're still working on finishing up a fully sharded database. MySQL fault
tolerance and autopromotion are also issues.