http://hi.baidu.com/whhzthfnayhntwe/item/a4f9ae056f08b012cc34eadc
Ruby Web Spidering and Data Extraction
Anemone:
http://anemone.rubyforge.org
Example:
Anemone.crawl("http://www.example.com/") do |anemone|
  anemone.on_every_page do |page|
    puts page.url
  end
end
Anemone provides five verbs:
after_crawl - run a block over all the crawled pages once the crawl has finished
focus_crawl - use a block to choose which links to follow from each page
on_every_page - run a block on every page
on_pages_like - given a pattern, run a block only on pages whose URL matches it
skip_links_like - skip links whose URL matches a pattern
Each page object has the following attributes:
url - the URL of the page
aliases - URIs that redirected to this page, or that this page redirected to
headers - the HTTP response headers
code - the HTTP response code
doc - the Nokogiri::HTML::Document for the page
links - an array of every URL on the page that points to the same domain
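As a sketch of how these verbs combine (the start URL and the two URL patterns below are invented for illustration, not part of Anemone itself), a crawl that skips image links and only processes article pages might look like this:

```ruby
# Hypothetical patterns for this sketch:
SKIP_PATTERN  = /\.(jpg|png|gif)\z/i   # don't follow image links
MATCH_PATTERN = %r{/articles/}         # only run the block on article pages

def crawl_articles(start_url)
  require 'anemone'  # gem install anemone
  Anemone.crawl(start_url) do |anemone|
    anemone.skip_links_like(SKIP_PATTERN)
    anemone.on_pages_like(MATCH_PATTERN) do |page|
      puts "#{page.code} #{page.url}"    # HTTP status and URL
    end
    anemone.after_crawl do |pages|
      puts "crawled #{pages.size} pages" # pages is Anemone's PageStore
    end
  end
end
```

The patterns are plain Ruby regexps, so the link-filtering logic can be checked before pointing the crawler at a real site.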
---------------------------------------------------------------------------
Mechanize:
http://mechanize.rubyforge.org
examples:
require 'rubygems'
require 'mechanize'
# create a new agent
agent = Mechanize.new
# fetch a page
page = agent.get("http://www.inruby.com")
# Mechanize::Page methods
page.title
page.content_type
page.encoding
page.images
page.links
page.forms
page.frames
page.iframes
page.labels
signup_page = page.link_with(:href => /signup/).click
# working with Mechanize::Form
u_form = signup_page.form_with(:action => /users/)
u_form['user[login]'] = 'maiaimi'
u_form['user[password]'] = 'maiami'
u_form['user[password_confirmation]'] = 'maiami'
u_form.submit
---------------------------------------------------------------------------------------------
#example 2
This is an example of how to access a login-protected site with WWW::Mechanize (in current versions of the gem the class is simply Mechanize). In this example the login form has two fields, named user and password. In other words, the HTML contains the following code:
<input name="user" .../>
<input name="password" .../>
Note that this example also shows how to enable Mechanize logging and how to capture the HTML response:
require 'rubygems'
require 'logger'
require 'mechanize'

# log to STDERR (on old gem versions use WWW::Mechanize.new instead)
agent = Mechanize.new { |a| a.log = Logger.new(STDERR) }
#agent.set_proxy('a-proxy', '8080')
page = agent.get 'http://bobthebuilder.com'

form = page.forms.first
form.user = 'bob'
form.password = 'password'

page = agent.submit form

File.open("output.html", "w") { |file| file << page.body }
Use the search method to scrape the page content. In this example I extract all the text contained in span elements that are themselves inside a table element whose class attribute equals 'list-of-links':
puts page.search("//table[@class='list-of-links']//span/text()")
Mechanize Tips
1. agent alias
irb(main):071:0> Mechanize::AGENT_ALIASES.keys
=> ["Mechanize", "Linux Firefox", "Mac Mozilla", "Linux Mozilla", "Windows IE 6", "iPhone", "Linux Konqueror", "Windows IE 7", "Mac FireFox", "Mac Safari", "Windows Mozilla"]
agent = Mechanize.new
agent.user_agent_alias = 'Windows IE 7'
2. reassign Mechanize's HTML parser
Mechanize.html_parser = Hpricot